rNOMADS (version 2.5.3)

WebCrawler: Get web pages

Description

Discover all links on a given web page, follow each one, and recursively scan every link found. Return a list of web addresses whose pages contain no links.
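
WebCrawler thus performs a depth-first traversal of the link tree and keeps only the leaf pages. A minimal sketch of the idea, assuming the xml2 package for link extraction (illustrative only; the actual rNOMADS implementation may differ):

library(xml2)

# Sketch of a recursive crawl: gather the links on a page, recurse
# into each one, and keep the addresses of pages with no links.
# Note: no cycle detection, so suitable only for tree-like sites.
CrawlSketch <- function(url) {
  hrefs <- tryCatch(
    xml_attr(xml_find_all(read_html(url), "//a"), "href"),
    error = function(e) character(0))
  hrefs <- hrefs[!is.na(hrefs)]
  if (length(hrefs) == 0) {
    return(url)  # leaf page: no links, so record its address
  }
  # Resolve relative links against the current page, then recurse
  unlist(lapply(url_absolute(hrefs, url), CrawlSketch))
}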

Usage

WebCrawler(url, depth = NULL, verbose = TRUE)

Value

urls.out

A list of web page addresses, each of which corresponds to a model instance.

Arguments

url

A URL to scan for links.

depth

The maximum number of links to return. Setting this avoids recursively scanning hundreds of links. Defaults to NULL, which returns everything.

verbose

If TRUE, print out each link as it is discovered. Defaults to TRUE.

Author

Daniel C. Bowman <danny.c.bowman@gmail.com>

Details

CrawlModels uses this function to get all links present on a model page.
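
In practice it is usually simpler to call CrawlModels, which builds the model page URL for you and hands it to WebCrawler. A typical call, assuming the model abbreviation "gfs_0p50" (the GFS 0.5 x 0.5 degree model):

library(rNOMADS)

# CrawlModels constructs the model page URL from an abbreviation
# and crawls it; depth limits the number of links returned, as above.
urls.out <- CrawlModels(abbrev = "gfs_0p50", depth = 2)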

See Also

CrawlModels, ParseModelPage

Examples

library(rNOMADS)

# Find the first 10 model runs for the
# GFS 0.5 x 0.5 degree model.
# Wrapped in if (FALSE) because it requires network access.
if (FALSE) {
  urls.out <- WebCrawler(
    "http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl",
    depth = 10)
}
