hardhat

Introduction

hardhat is a developer-focused package designed to ease the creation of new modeling packages, while simultaneously promoting good R modeling package standards as laid out by the set of opinionated Conventions for R Modeling Packages.

hardhat has four main goals:

  • Easily, consistently, and robustly preprocess data at fit time and prediction time with mold() and forge() (a minimal sketch of this workflow follows this list).

  • Provide one source of truth for common input validation functions, such as checking if new data at prediction time contains the same required columns used at fit time.

  • Provide extra utility functions for additional common tasks, such as adding intercept columns, standardizing predict() output, and extracting valuable class and factor level information from the predictors.

  • Reimagine the base R preprocessing infrastructure of stats::model.matrix() and stats::model.frame() using the stricter approaches found in model_matrix() and model_frame().
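
For example, here is a minimal sketch of that fit-time/prediction-time round trip, using the built-in iris data; the object names are purely illustrative:

library(hardhat)

# Fit time: mold() runs the preprocessing implied by the formula and returns
# the predictors, the outcomes, and a blueprint recording how it was done.
processed <- mold(Sepal.Length ~ Sepal.Width + Species, iris)

processed$predictors  # preprocessed predictors (factors expanded to indicators)
processed$outcomes    # the outcome column
processed$blueprint   # store this inside the fitted model object

# Prediction time: forge() reapplies the recorded preprocessing to new data
# and validates it against what was seen at fit time.
baked <- forge(head(iris), processed$blueprint)

baked$predictors      # new predictors in exactly the form used at fit time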

The idea is to reduce the burden of creating a good modeling interface as much as possible, and instead let the package developer focus on writing the core implementation of their new model. This benefits not only the developer, but also the user of the modeling package, as the standardization allows users to build a set of “expectations” around what any modeling function should return, and how they should interact with it.
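
As a rough illustration of the input validation and utility helpers mentioned above, here is a short sketch; the data and column names are arbitrary examples, not tied to any particular model:

library(hardhat)

# Utility: prepend an intercept column of 1s to a predictor data frame.
predictors <- iris[c("Sepal.Width", "Petal.Width")]
add_intercept_column(predictors)

# Validation: check that data supplied at prediction time still contains the
# columns that were required at fit time; this errors if any are missing.
validate_column_names(head(iris), c("Sepal.Width", "Petal.Width"))

# Standardized predict() output: spruce_numeric() wraps a vector of numeric
# predictions into a one-column tibble named .pred.
spruce_numeric(c(5.1, 4.9, 4.7))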

Installation

You can install the released version of hardhat from CRAN with:

install.packages("hardhat")

And the development version from GitHub with:

# install.packages("devtools")
devtools::install_github("tidymodels/hardhat")

Learning more

To learn how to use hardhat, check out the vignettes:

  • vignette("mold", "hardhat"): Learn how to preprocess data at fit time with mold().

  • vignette("forge", "hardhat"): Learn how to preprocess new data at prediction time with forge().

  • vignette("package", "hardhat"): Learn how to use mold() and forge() to help in creating a new modeling package.

You can also watch Max Kuhn discuss how to use hardhat to build a new modeling package from scratch in his talk at the XI Jornadas de Usuarios de R conference.

Contributing

This project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Package details

Version: 1.1.0
License: MIT + file LICENSE
Monthly Downloads: 157,568
Last Published: June 10th, 2022

Functions in hardhat (1.1.0)

  • hardhat-extract: Generics for object extraction
  • default_formula_blueprint: Default formula blueprint
  • default_recipe_blueprint: Default recipe blueprint
  • hardhat-package: hardhat: Construct Modeling Packages
  • importance_weights: Importance weights
  • is_blueprint: Is x a preprocessing blueprint?
  • new_formula_blueprint: Create a new preprocessing blueprint
  • add_intercept_column: Add an intercept column to data
  • contr_one_hot: Contrast function for one-hot encodings
  • frequency_weights: Frequency weights
  • model_matrix: Construct a design matrix
  • model_offset: Extract a model offset
  • run-mold: mold() according to a blueprint
  • get_data_classes: Extract data classes from a data frame or matrix
  • standardize: Standardize the outcome
  • new_default_formula_blueprint: Create a new default blueprint
  • is_case_weights: Is x a case weights vector?
  • is_importance_weights: Is x an importance weights vector?
  • model_frame: Construct a model frame
  • new_case_weights: Extend case weights
  • scream: 😱 Scream.
  • default_xy_blueprint: Default XY blueprint
  • refresh_blueprint: Refresh a preprocessing blueprint
  • new_frequency_weights: Construct a frequency weights vector
  • run-forge: forge() according to a blueprint
  • delete_response: Delete the response from a terms object
  • validate_predictors_are_numeric: Ensure predictors are all numeric
  • update_blueprint: Update a preprocessing blueprint
  • is_frequency_weights: Is x a frequency weights vector?
  • validate_no_formula_duplication: Ensure no duplicate terms appear in formula
  • weighted_table: Weighted table
  • validate_outcomes_are_binary: Ensure that the outcome has binary factors
  • tune: Mark arguments for tuning
  • validate_column_names: Ensure that data contains required column names
  • new_importance_weights: Construct an importance weights vector
  • new_model: Constructor for a base model
  • validate_outcomes_are_univariate: Ensure that the outcome is univariate
  • get_levels: Extract factor levels from a data frame
  • validate_prediction_size: Ensure that predictions have the correct number of rows
  • shrink: Subset only required columns
  • modeling-package: Create a modeling package
  • hardhat-example-data: Example data for hardhat
  • validate_outcomes_are_factors: Ensure that the outcome has only factor columns
  • validate_outcomes_are_numeric: Ensure outcomes are all numeric
  • spruce: Spruce up predictions
  • mold: Mold data for modeling
  • extract_ptype: Extract a prototype
  • forge: Forge prediction-ready data