oolong (version 0.6.1)

Create Validation Tests for Automated Content Analysis

Description

Intended to create standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow, with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009), and word set intrusion (Ying et al. 2021) tests, as well as functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020).
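The workflow described above (prepare, administer, evaluate) can be sketched as follows. This is a minimal, hedged example based on the package's documented interface, using the bundled `abstracts_seededlda` topic model; exact arguments may vary by version, so consult the function index below before use.

```r
library(oolong)

## Prepare: generate a word intrusion test from a fitted topic model.
## abstracts_seededlda is a topic model shipped with the package.
oolong_test <- create_oolong(input_model = abstracts_seededlda)

## Administer: the coder answers the intrusion questions in an
## interactive Shiny gadget, then locks the test.
oolong_test$do_word_intrusion_test()
oolong_test$lock()

## Evaluate: summarize one or more locked tests.
summarize_oolong(oolong_test)
```

Because the test itself is interactive, the middle two calls require a human coder; `export_oolong` (listed below) can instead package the test as a deployable Shiny app for remote coders.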

Install

install.packages('oolong')

Monthly Downloads

248

Version

0.6.1

License

LGPL (>= 2.1)

Maintainer

Chung-hong Chan

Last Published

April 15th, 2024

Functions in oolong (0.6.1)

abstracts

Abstracts of communication journals dataset
export_oolong

Export a deployable Shiny app from an oolong object into a directory
print.oolong_gold_standard

Print oolong gold standard object
check_oolong

Check whether the oolong needs to be updated
create_oolong

Generate an oolong test
deploy_oolong

Deploy an oolong test
clone_oolong

Clone an oolong object
summarize_oolong

Summarize oolong objects
print.oolong_summary

Print and plot oolong summary
revert_oolong

Obtain a locked oolong from a downloaded data file
update_oolong

Update an oolong object to the latest version
afinn

AFINN dictionary
trump2k

Trump's tweets dataset
newsgroup_nb

Naive Bayes model trained on 20 newsgroups data
abstracts_seededlda

Topic models trained with the abstracts dataset
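Several of the functions above are designed to be combined for multi-coder validation. The sketch below, hedged and based on the documented interface, shows two coders working on structurally identical copies of one test via `clone_oolong`, with their locked results compared by `summarize_oolong`:

```r
library(oolong)

## Create one test and clone it, so both coders answer the same items
test_a <- create_oolong(input_model = abstracts_seededlda)
test_b <- clone_oolong(test_a)

## ... each coder runs $do_word_intrusion_test() and $lock() on their copy ...

## Once both copies are locked, summarize_oolong reports agreement
## and model precision across coders.
summarize_oolong(test_a, test_b)
```

For dictionary-based methods, the analogous path is `create_oolong(input_corpus = ...)` to generate a gold-standard coding task, followed by the same lock-and-summarize steps.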