thedata <- importKCL(site = "my1", year = 2009, pollutant = "all",
met = FALSE, units = "mass")
site: Site code of the network site to import, e.g. "my1" for
Marylebone Road. Several sites can be imported at once with, for
example, site = c("my1", "kc1") --- to import Marylebone Road and
North Kensington.

year: Year or years to import. To import a sequence of years use, for
example, year = 1990:2000. To import several specific years use
year = c(1990, 1995, 2000) for example.

pollutant: Pollutants to import. The default, "all", imports all
available pollutants for a site. To import only NOx and NO2, for
example, use pollutant = c("nox", "no2").

met: Should meteorological data be added? The default is FALSE. If
TRUE, wind speed (m/s), wind direction (degrees), solar radiation and
rain amount are available. See details below. Access to reliable and
free meteorological data is problematic.

units: By default concentrations are returned in mass units. Use
units = "volume" to use ppb etc. PM10_raw TEOM data are multiplied by
1.3 and PM2.5 have no correction applied. See details below concerning
PM10 concentrations.

The importKCL function is currently released as a test version.
NOTE: information is required on the site codes for easy use.

The importKCL
function has been written to make it easy
to import data from the King's College London air pollution
networks. KCL have provided .RData files (R workspaces) of all
individual sites and years for the KCL networks. These files are updated on a
weekly basis. This approach requires a link to the Internet to work.
There are several advantages over the web portal approach where .csv
files are downloaded. First, it is quick to select a range of sites,
pollutants and periods (see examples below). Second, storing the data as
.RData objects is very efficient as they are about four times smaller
than .csv files --- which means the data downloads quickly and saves
bandwidth. Third, the function completely avoids any need for data
manipulation or setting time formats, time zones etc. Finally, it is
easy to import many years of data beyond the current limit of about
64,000 lines. The final point makes it possible to download several long
time series in one go. The function also has the advantage that the
proper site name is imported and used in openair
functions.
The site codes and pollutant names can be upper or lower case. The
function will issue a warning when data less than six months old are
downloaded; such data may not yet be ratified.
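As a quick sketch (the site, year and pollutant here are arbitrary), the
two calls below should therefore return the same data because the site
code and pollutant name are matched regardless of case:
## site codes and pollutant names can be given in upper or lower case
a <- importKCL(site = "my1", year = 2008, pollutant = "nox")
b <- importKCL(site = "MY1", year = 2008, pollutant = "NOX")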
The data are imported by stacking sites on top of one another and will
have field names date, site, code (the site code) and the pollutant(s).
Sometimes it is useful to have columns of site data. This can be done
using the reshape function --- see examples below.
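Because the sites are stacked, rows for a single site can also be
selected directly using the code column. A minimal sketch (assuming the
codes are stored in upper case, as in the reshape example below):
## keep only the Marylebone Road rows from a stacked import
thedata <- importKCL(site = c("my1", "kc1"), year = 2008, pollutant = "nox")
my1.rows <- subset(thedata, code == "MY1")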
The function imports two measures of PM10 where available. PM10_raw
are TEOM measurements with a 1.3 factor applied to take account of
volatile losses. The PM10 data are a current best estimate of a
gravimetric equivalent measure as described below.
For the assessment of the EU Limit Values, PM10 needs to be measured
using the reference method or one shown to be equivalent to the
reference method. Defra carried out extensive trials between 2004 and
2006 to establish which types of particulate analysers in use in the UK
were equivalent. These trials found that measurements made using
Partisol, FDMS, BAM and SM200 instruments were shown to be equivalent to
the PM10 reference method. However, correction factors need to be
applied to measurements from the SM200 and BAM instruments. Importantly,
the TEOM was demonstrated as not being equivalent to the reference
method due to the loss of volatile PM, even when the 1.3 correction
factor was applied. The Volatile Correction Model (VCM) was developed
for Defra at King's to allow measurements of PM10 from TEOM instruments
to be converted to reference equivalent; it uses the measurements of
volatile PM made using nearby FDMS instruments to correct the
measurements made by the TEOM. It passed the equivalence testing using
the same methodology used in the Defra trials and is now the recommended
method for correcting TEOM measurements (Defra, 2009). VCM correction of
TEOM measurements can only be applied after 1st January 2004, when
sufficiently widespread measurements of volatile PM became
available. The 1.3 correction factor is now considered redundant for
measurements of PM10 made after 1st January 2004. Further information
on the VCM can be found on the VCM project website. The PM10 data
available for download (including those imported using importKCL) now
report PM10 results as reference equivalent. For
PM10 measurements made by BAM and SM200 analysers the applicable
correction factors have been applied. For measurements from TEOM
analysers the 1.3 factor has been applied up to 1st January 2004, then
the VCM method has been used to convert to reference equivalent.
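To see the effect of these corrections, the two PM10 measures can be
imported and summarised side by side. This is only a sketch: the
lower-case column names pm10 and pm10_raw are assumed here, and the
site and year are arbitrary.
## compare reference-equivalent PM10 with the 1.3-corrected TEOM measure
my1 <- importKCL(site = "my1", year = 2008)  # all pollutants by default
summary(my1[, intersect(c("pm10", "pm10_raw"), names(my1))])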
The meteorological data are meant to represent 'typical' conditions in
London, but users may prefer to use their own data. The data provide an
estimate of general meteorological conditions across Greater
London. For meteorological species (wd, ws, rain, solar) each data
point is formed by averaging measurements from a subset of LAQN
monitoring sites that have been identified as having minimal
disruption from local obstacles and a long-term reliable dataset. The
exact sites used vary between species, but include between two and
five sites per species. Therefore, the data should represent 'London
scale' meteorology, rather than local conditions.
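As an illustration of how these fields can be used, the sketch below
adds the meteorological data to an import and plots a wind rose from
the ws and wd columns (the site and year are arbitrary):
## wind rose from the 'London scale' ws/wd columns added by met = TRUE
my1 <- importKCL(site = "my1", year = 2008, met = TRUE)
windRose(my1)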
While the function is being developed, the following site codes should
help with selection. We will also make available other metadata such as
site type and location to make it easier to select sites based on other
information. Note that these codes need to be refined because only the
common species are currently available for export, i.e. NOx, NO2, O3,
CO, SO2, PM10 and PM2.5.
See also: importAURN, importADMS
## import all pollutants from Marylebone Rd for 2000:2009
mary <- importKCL(site = "my1", year = 2000:2009)
## import nox, no2, o3 from Marylebone Road and North Kensington for 2000
thedata <- importKCL(site = c("my1", "kc1"), year = 2000,
pollutant = c("nox", "no2", "o3"))
## import met data too...
my1 <- importKCL(site = "my1", year = 2008, met = TRUE)
## reshape the data so that each column represents a pollutant/site
thedata <- importKCL(site = c("my1", "kc1"), year = 2008,
pollutant = "o3")
wide <- reshape(thedata, idvar = "date", timevar = "code",
direction = "wide")
## wide now has columns o3.MY1 and o3.KC1 (plus site.MY1 and site.KC1)
## can also get rid of columns, in this case site:
wide <- reshape(thedata, idvar = "date", timevar = "code",
direction = "wide", drop = "site")
## now can export as a csv file:
write.csv(wide, file = "~/temp/thedata.csv")
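## a further sketch of the units argument described above: this call (with
## an arbitrary site and year) returns concentrations in volume units (ppb
## etc.) rather than the default mass units
vol <- importKCL(site = "my1", year = 2008, pollutant = c("nox", "no2"),
units = "volume")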