Provides an interface to the UMAP algorithm implemented in Python.
umap(data, include_input = TRUE, n_neighbors = 15L,
n_components = 2L, metric = "euclidean", n_epochs = NULL,
learning_rate = 1, alpha = 1, init = "spectral", spread = 1,
min_dist = 0.1, set_op_mix_ratio = 1, local_connectivity = 1L,
repulsion_strength = 1, bandwidth = 1, gamma = 1,
negative_sample_rate = 5L, transform_queue_size = 4, a = NULL,
b = NULL, random_state = NULL, metric_kwds = dict(),
angular_rp_forest = FALSE, target_n_neighbors = -1L,
target_metric = "categorical", target_metric_kwds = dict(),
target_weight = 0.5, transform_seed = 42L, verbose = FALSE)
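Most calls only override a handful of these arguments. The short sketch below is illustrative rather than canonical; it assumes the umapr package is installed and the underlying Python umap module is available, and uses iris purely as example data.

library("umapr")

# emphasise local structure with a smaller neighborhood and tighter packing of points
embedding <- umap(as.matrix(iris[, 1:4]),
                  n_neighbors = 10L,
                  min_dist = 0.05,
                  n_components = 2L)

Lowering n_neighbors trades the global view of the manifold for local detail, while min_dist mainly changes how tightly the embedded points are allowed to pack.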
data: data frame or matrix. Input data.
include_input: logical. Attach the input data to the UMAP embeddings if desired.
n_neighbors: integer. The size of the local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. In general values should be in the range 2 to 100.
n_components: integer. The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any integer value in the range 2 to 100.
metric: character. The metric to use to compute distances in high dimensional space. If a string is passed it must match a valid predefined metric. If a general metric is required, a function that takes two 1d arrays and returns a float can be provided; for performance purposes it is required that this be a numba jit'd function. Valid string metrics include: euclidean, manhattan, chebyshev, minkowski, canberra, braycurtis, mahalanobis, wminkowski, seuclidean, cosine, correlation, haversine, hamming, jaccard, dice, russellrao, kulsinski, rogerstanimoto, sokalmichener, sokalsneath, yule. Metrics that take arguments (such as minkowski, mahalanobis etc.) can have arguments passed via the metric_kwds dictionary. At this time care must be taken and dictionary elements must be ordered appropriately; this will hopefully be fixed in the future.
n_epochs: integer. The number of training epochs to use in optimization.
learning_rate: numeric. The initial learning rate for the embedding optimization.
alpha: numeric. The initial learning rate for the embedding optimization.
init: character. How to initialize the low dimensional embedding. Options are: 'spectral' (use a spectral embedding of the fuzzy 1-skeleton), 'random' (assign initial embedding positions at random), or a numpy array of initial embedding positions.
spread: numeric. The effective scale of embedded points. In combination with ``min_dist`` this determines how clustered/clumped the embedded points are.
min_dist: numeric. The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result in a more even dispersal of points. The value should be set relative to the ``spread`` value, which determines the scale at which embedded points will be spread out.
set_op_mix_ratio: numeric. Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial set. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.
local_connectivity: integer. The local connectivity required, i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value the more connected the manifold becomes locally. In practice this should be not more than the local intrinsic dimension of the manifold.
repulsion_strength: numeric. Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
bandwidth: numeric. The effective bandwidth of the kernel if we view the algorithm as similar to Laplacian eigenmaps. Larger values induce more connectivity and a more global view of the data, smaller values concentrate more locally.
gamma: numeric. Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
negative_sample_rate: numeric. The number of negative edge/1-simplex samples to use per positive edge/1-simplex sample in optimizing the low dimensional embedding.
transform_queue_size: numeric. For transform operations (embedding new points using a trained model) this will control how aggressively to search for nearest neighbors. Larger values will result in slower performance but more accurate nearest neighbor evaluation.
a: numeric. More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by ``min_dist`` and ``spread``.
b: numeric. More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by ``min_dist`` and ``spread``.
random_state: integer. If an integer, random_state is the seed used by the random number generator; if NULL, the random number generator is the RandomState instance used by `np.random`.
metric_kwds: reticulate dictionary. Arguments to pass on to the metric, such as the ``p`` value for Minkowski distance (see the sketch after this argument list).
angular_rp_forest: logical. Whether to use an angular random projection forest to initialise the approximate nearest neighbor search. This can be faster, but is mostly only useful for metrics that use an angular style distance, such as cosine, correlation etc. In the case of those metrics angular forests will be chosen automatically.
target_n_neighbors: integer. The number of nearest neighbors to use to construct the target simplicial set. If set to -1, use the n_neighbors value.
target_metric: character or function. The metric used to measure distance for a target array when using supervised dimension reduction. By default this is 'categorical', which will measure distance in terms of whether categories match or are different. Furthermore, if semi-supervised learning is required, target values of -1 will be treated as unlabelled under the 'categorical' metric. If the target array takes continuous values (e.g. for a regression problem) then a metric of 'l1' or 'l2' is probably more appropriate.
target_metric_kwds: reticulate dictionary. Keyword arguments to pass to the target metric when performing supervised dimension reduction. If empty, no arguments are passed on.
target_weight: numeric. Weighting factor between data topology and target topology. A value of 0.0 weights entirely on data, a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target.
transform_seed: integer. Random seed used for the stochastic aspects of the transform operation. This ensures consistency in transform operations.
verbose: logical. Controls verbosity of logging.
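To make the roles of metric_kwds and random_state concrete, the sketch below passes the Minkowski ``p`` value through a reticulate dictionary and fixes the seed so that repeated runs give the same layout. This is a hedged example rather than canonical usage: the dict() helper is assumed to come from reticulate, as suggested by the defaults in the usage above.

library("umapr")
library("reticulate")

# pass p = 3 to the Minkowski metric via a Python dictionary and fix the random state
embedding <- umap(as.matrix(iris[, 1:4]),
                  metric = "minkowski",
                  metric_kwds = dict(p = 3),
                  random_state = 42L)

Note the caveat in the metric description above: at present the elements of metric_kwds must be ordered appropriately, which only matters when the metric takes more than one argument.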
Value: a matrix of UMAP embeddings (with the input data attached when include_input = TRUE).
Leland McInnes and John Healy (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. ArXiv e-prints 1802.03426.
# Not run:
# load the umapr package (and the underlying Python umap module)
library("umapr")

# run UMAP on the numeric columns of iris; a matrix or a data frame both work
umap(as.matrix(iris[, 1:4]))
umap(iris[, 1:4])
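As a follow-up to the examples above, the returned matrix can be visualised directly. This sketch assumes that with include_input = FALSE the matrix contains only the n_components embedding coordinates, one column per dimension; that assumption is not guaranteed by the documentation above.

library("umapr")

# embed without attaching the input data, then plot the two embedding coordinates
emb <- umap(as.matrix(iris[, 1:4]), include_input = FALSE)
plot(emb[, 1], emb[, 2],
     col = iris$Species, pch = 19,   # colour by species (assumes rows are preserved in order)
     xlab = "UMAP 1", ylab = "UMAP 2")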