stream (version 2.0-1)

stream_pipeline: Create a Data Stream Pipeline

Description

Define a complete data stream pipeline consisting of a data stream, filters, and a data mining task using %>%.

Usage

DST_Runner(dsd, dst)

Arguments

dsd

A data stream (subclass of DSD) typically provided using a %>% (pipe).

dst

A data stream mining task (subclass of DST).

Author

Michael Hahsler

Details

A data stream pipeline consists of a data stream, filters, and a data mining task:

DSD %>% DSF %>% DST_Runner

Once the pipeline is defined, it can be run using update(): points are taken from the DSD data stream source, passed through the sequence of DSF filters, and then used to update the DST task.
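
For example, several filters can be chained before the task. The following is a minimal sketch; the DSF_Downsample() filter and its factor argument are assumptions and not part of this help page:

library(stream)

# pipeline with a sequence of two filters (sketch)
pipeline <- DSD_Gaussians(k = 3, d = 2) %>%
  DSF_Downsample(factor = 2) %>%    # keep every 2nd point (assumed filter)
  DSF_Scale() %>%                   # z-score scale each dimension
  DST_Runner(DSC_DBSTREAM(r = .3))

update(pipeline, n = 500)
pipeline$dst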

DST_Multi can be used to update multiple models in the pipeline with the same stream.
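
Since DST_Multi is itself a DST, it can serve as the task in a pipeline. A minimal sketch, assuming DST_Multi() takes a list of DST objects and that the individual tasks are updated in place:

# two clustering tasks updated with the same points (sketch)
dbstream <- DSC_DBSTREAM(r = .3)
dstream  <- DSC_DStream(gridsize = .3)

multi_pipeline <- DSD_Gaussians(k = 3, d = 2) %>%
  DST_Runner(DST_Multi(list(dbstream, dstream)))

update(multi_pipeline, n = 1000)

# both clusterers have now seen the same 1000 points
dbstream
dstream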

See Also

Other DST: DSAggregate(), DSClassifier(), DSC(), DSOutlier(), DSRegressor(), DST_SlidingWindow(), DST_WriteStream(), DST(), evaluate, predict(), update()

Examples

library(stream)

set.seed(1500)

# Set up a pipeline with a DSD data source, DSF filters, and a DST task
cluster_pipeline <- DSD_Gaussians(k = 3, d = 2) %>%
                    DSF_Scale() %>%
                    DST_Runner(DSC_DBSTREAM(r = .3))

cluster_pipeline

# the DSD and DST can be accessed directly
cluster_pipeline$dsd
cluster_pipeline$dst

# update the DST using the pipeline; by default, update() returns the micro-clusters
update(cluster_pipeline, n = 1000)

cluster_pipeline$dst
get_centers(cluster_pipeline$dst, type = "macro")
plot(cluster_pipeline$dst)
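
# The fitted clusterer can also be evaluated against new points from the
# stream source (a sketch; evaluate_static() and the chosen measures are
# assumed from the stream evaluation interface, see the evaluate help page)
evaluate_static(cluster_pipeline$dst, cluster_pipeline$dsd,
                measure = c("numMicroClusters", "purity"), n = 500)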
