Imports a dataset and does the necessary transformations to get the right column formats. Unless specified otherwise, the function will set the timezone of the data to UTC. It will also enforce an Id to separate different datasets and will order/arrange the dataset within each Id by Datetime. See the Details and Devices section for more information and the full list of arguments.
import_Dataset(device, ...)

import

import_Dataset() returns a tibble/dataframe with a POSIXct column for the datetime. import is an object of class list of length 18 that holds the device-specific import functions.
device: From what device do you want to import? For a few devices, there is a sample data file that you can use to test the function (see the examples). See supported_devices() for a list of supported devices and see below for more information on devices with specific requirements.
...: Parameters that get handed down to the specific import functions.
The set of import functions provide a convenient way to
import light logger data that is then perfectly formatted to add metadata,
make visualizations and analyses. There are a number of devices supported,
where import should just work out of the box. To get an overview, you can
simply call supported_devices(). The list will grow continuously as the package is maintained.
supported_devices()
#> [1] "ActLumus" "ActTrust" "Actiwatch_Spectrum"
#> [4] "Actiwatch_Spectrum_de" "Circadian_Eye" "DeLux"
#> [7] "GENEActiv_GGIR" "Kronowise" "LIMO"
#> [10] "LYS" "LiDo" "LightWatcher"
#> [13] "MotionWatch8" "OcuWEAR" "Speccy"
#> [16] "SpectraWear" "VEET" "nanoLambda"
Manufacturer: Condor Instruments
Model: ActLumus
Implemented: Sep 2023
A sample file is provided with the package; it can be accessed through system.file("extdata/205_actlumus_Log_1020_20230904101707532.txt.zip", package = "LightLogR"). It does not need to be unzipped to be imported. This sample file is a good example of a regular dataset without gaps.
Manufacturer: LYS Technologies
Model: LYS Button
Implemented: Sep 2023
A sample file is provided with the package; it can be accessed through system.file("extdata/sample_data_LYS.csv", package = "LightLogR"). This sample file is a good example of an irregular dataset.
Manufacturer: Philips Respironics
Model: Actiwatch Spectrum
Implemented: Nov 2023 / July 2024
Important note: The Actiwatch_Spectrum function is for international/English formatting. The Actiwatch_Spectrum_de function is for German formatting, which differs slightly in the datetime format, the column names, and the decimal separator.
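A minimal, hedged sketch of calling the two variants (the file paths are hypothetical placeholders, not files shipped with the package):
import$Actiwatch_Spectrum("participant_01.csv", tz = "US/Eastern") # international/English export
import$Actiwatch_Spectrum_de("proband_01.csv", tz = "Europe/Berlin") # German export
# Both variants may need further arguments, e.g. column_names (see the argument list below).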
Manufacturer: Condor Instruments
Model: ActTrust1, ActTrust2
Implemented: Mar 2024
This function works for both ActTrust 1 and 2 devices.
Manufacturer: Monash University
Model: Speccy
Implemented: Feb 2024
Manufacturer: Intelligent Automation Inc
Model: DeLux
Implemented: Dec 2023
Manufacturer: University of Lucerne
Model: LiDo
Implemented: Nov 2023
Manufacturer: University of Manchester
Model: SpectraWear
Implemented: May 2024
Manufacturer: NanoLambda
Model: XL-500 BLE
Implemented: May 2024
Manufacturer: Object-Tracker
Model: LightWatcher
Implemented: June 2024
Manufacturer: Meta Reality Labs
Model: VEET
Implemented: July 2024
Required argument: modality. A character scalar describing which modality to import. Can be one of "ALS" (ambient light sensor), "IMU" (inertial measurement unit), "INF" (information), "PHO" (spectral sensor), or "TOF" (time of flight).
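As a hedged sketch (the file path is a hypothetical placeholder), importing only the ambient light sensor channel could look like this:
import$VEET("veet_export.csv", modality = "ALS", tz = "UTC") # modality is required for VEET files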
Manufacturer: Max-Planck-Institute for Biological Cybernetics, Tübingen
Model: melanopiQ Circadian Eye (Prototype)
Implemented: July 2024
Manufacturer: Kronohealth
Model: Kronowise
Implemented: July 2024
Manufacturer: Activeinsights
Model: GENEActiv
Note: This import function takes GENEActiv data that was preprocessed through the GGIR package. By default, GGIR aggregates light data into intervals of 15 minutes. This can be set via the windowsizes argument in GGIR, a three-value vector whose second value is set to 900 seconds by default. To import the preprocessed data with LightLogR, the filename argument requires a path to the parent directory of the GGIR output folders, specifically the meta folder, which contains the light exposure data. Multiple filenames can be specified, each of which needs to be a path to a different GGIR parent directory. GGIR exports can contain data from multiple participants; these will always be imported fully by providing the parent directory. Use the pattern argument to extract sensible Ids from the .RData filenames within the meta/basic/ folder. As per the author, Dr. Vincent van Hees, GGIR preprocessed data are always in local time, provided the desiredtz/configtz are properly set in GGIR. LightLogR still requires a timezone to be set, but will not time-shift the imported data.
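A hedged sketch of such an import (the directory paths and Id pattern are hypothetical and need to match your own GGIR output):
# Each path points to the parent directory of a GGIR output (the one containing the meta/ folder)
ggir_paths <- c("participant_101/ggir_output/", "participant_102/ggir_output/")
# pattern extracts the Id (here: three digits) from the .RData filenames in meta/basic/
data_ggir <- import$GENEActiv_GGIR(ggir_paths, pattern = "\\d{3}", tz = "Europe/Berlin")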
Manufacturer: CamNtech
Implemented: September 2024
Manufacturer: ENTPE
Implemented: September 2024
LIMO exports LIGHT data and IMU data (inertial measurements, also UV) in separate files. Both can be read in with this function, but not at the same time. Please decide which type of data you need and provide the respective filenames.
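For example (hypothetical filenames), light and IMU exports would be imported in two separate calls:
light_data <- import$LIMO("limo_light_01.csv", tz = "Europe/Paris") # LIGHT export
imu_data <- import$LIMO("limo_imu_01.csv", tz = "Europe/Paris") # IMU export (inertial + UV)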
Manufacturer: Ocutune
Implemented: September 2024
OcuWEAR data contain spectral information. Due to the format of the data file, the spectrum is not directly part of the tibble, but rather a list column of tibbles within the imported data, containing a Wavelength (nm) and an Intensity (mW/m^2) column.
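Assuming the list column is named Spectrum (this name is an assumption here; check the column names after import), the spectra can be expanded into a long table with tidyr:
# dataset is an OcuWEAR import; Spectrum is the assumed name of the list column
dataset %>% tidyr::unnest(Spectrum) # yields one row per Wavelength/Intensity pair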
To import a file, simply specify the filename (and path) and feed it to the
import_Dataset
function. There are sample datasets for all devices.
The import functions provide a basic overview of the data after import, such as the intervals between measurements or the start and end dates.
filepath <- system.file("extdata/sample_data_LYS.csv", package = "LightLogR")
dataset <- import_Dataset("LYS", filepath, auto.plot = FALSE)
#>
#> Successfully read in 11'422 observations across 1 Ids from 1 LYS-file(s).
#> Timezone set is UTC.
#> The system timezone is Europe/Berlin. Please correct if necessary!
#>
#> First Observation: 2023-06-21 00:00:12
#> Last Observation: 2023-06-22 23:59:48
#> Timespan: 2 days
#>
#> Observation intervals:
#> Id interval.time n pct
#> 1 sample_data_LYS 15s 10015 87.689%
#> 2 sample_data_LYS 16s 1367 11.969%
#> 3 sample_data_LYS 17s 23 0.201%
#> 4 sample_data_LYS 18s 16 0.140%
Import functions can also be called directly:
filepath <- system.file("extdata/205_actlumus_Log_1020_20230904101707532.txt.zip", package = "LightLogR")
dataset <- import$ActLumus(filepath, auto.plot = FALSE)
#>
#> Successfully read in 61'016 observations across 1 Ids from 1 ActLumus-file(s).
#> Timezone set is UTC.
#> The system timezone is Europe/Berlin. Please correct if necessary!
#>
#> First Observation: 2023-08-28 08:47:54
#> Last Observation: 2023-09-04 10:17:04
#> Timespan: 7.1 days
#>
#> Observation intervals:
#> Id interval.time n pct
#> 1 205_actlumus_Log_1020_20230904101707532.txt 10s 61015 100%
dataset %>% gg_days()
dataset %>%
dplyr::select(Datetime, TEMPERATURE, LIGHT, MEDI, Id) %>%
dplyr::slice(1500:1505)
#> # A tibble: 6 x 5
#> # Groups: Id [1]
#> Datetime TEMPERATURE LIGHT MEDI Id
#> <dttm> <dbl> <dbl> <dbl> <fct>
#> 1 2023-08-28 12:57:44 26.9 212. 202. 205_actlumus_Log_1020_20230904101~
#> 2 2023-08-28 12:57:54 26.9 208. 199. 205_actlumus_Log_1020_20230904101~
#> 3 2023-08-28 12:58:04 26.9 205. 196. 205_actlumus_Log_1020_20230904101~
#> 4 2023-08-28 12:58:14 26.8 204. 194. 205_actlumus_Log_1020_20230904101~
#> 5 2023-08-28 12:58:24 26.9 203. 194. 205_actlumus_Log_1020_20230904101~
#> 6 2023-08-28 12:58:34 26.8 204. 195. 205_actlumus_Log_1020_20230904101~
There are specific import functions and a general import function. The general import function is described below, whereas the specific import functions take the form import$device(). The general import function is a thin wrapper around the specific import functions. The specific import functions take the following arguments (a combined usage sketch follows below):
filename: Filename(s) for the dataset. Can also contain the filepath, but path must then be NULL. Expects a character. If the vector is longer than 1, multiple files will be read into one tibble.
path: Optional path for the dataset(s). NULL is the default. Expects a character.
n_max: Maximum number of lines to read. Default is Inf.
tz: Timezone of the data. "UTC" is the default. Expects a character. You can look up the supported timezones with OlsonNames().
Id.colname: Lets you specify a column for the Id of a dataset. Expects a symbol (default is Id). This column will be used for grouping (dplyr::group_by()).
auto.id: If the Id.colname column is not part of the dataset, the Id can be automatically extracted from the filename. The argument expects a regular expression (regex) and will by default just give the whole filename without the file extension.
manual.id: If this argument is not NULL and no Id column is part of the dataset, this character scalar will be used. We discourage the use of this argument when importing more than one file.
silent: If set to TRUE, the function will not print a summary message of the import or plot the overview. Default is FALSE.
locale: The locale controls defaults that vary from place to place.
dst_adjustment: If a file crosses daylight saving time, but the device does not adjust the timestamps accordingly, you can set this argument to TRUE to apply the shift manually. It is selective, so it will only be applied to files that cross between DST and standard time. Default is FALSE. Uses dst_change_handler() to do the adjustment; see there for more information. It is not equipped to handle two jumps in one file (i.e., back and forth between DST and standard time), but will work fine if the jumps occur in separate files.
auto.plot: A logical on whether to call gg_overview() after import. Default is TRUE, but it is set to FALSE if the argument silent is set to TRUE.
...: Supply additional arguments to the readr import functions, like na. Might also be used to supply arguments to the specific import functions, like column_names for Actiwatch_Spectrum devices. Those devices will always throw a helpful error message if you forget to supply the necessary arguments.
If the Id column is already part of the dataset, it will just use this column. If the column is not present, it will add this column and fill it with the filename of the import file (see the auto.id argument).
print_n can be used if you want to see more rows from the observation intervals.
remove_duplicates can be used if identical observations are present within or across multiple files. The default is FALSE. If set to TRUE, the function keeps only unique observations (rows). This is a convenience implementation of dplyr::distinct().
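A hedged sketch combining several of these arguments in one call (the filenames and the Id regex are hypothetical):
files <- c("205_actlumus_day1.txt", "206_actlumus_day1.txt") # hypothetical export files
data <- import$ActLumus(
  files,
  tz = "Europe/Berlin", # timezone the datetimes are stored in
  auto.id = "^\\d{3}", # take the leading three digits of each filename as the Id
  dst_adjustment = TRUE, # shift timestamps in files that cross a DST change
  remove_duplicates = TRUE, # keep only unique observations within/across files
  print_n = 20 # show more rows of the observation-interval summary
)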
See also: supported_devices()