
1.6.8.2: Classification without learning


We have seen that plotting multivariate data always runs into one of two problems: either there are too many elements (e.g., in parallel coordinates), which are hard to grasp, or some grouping operation (e.g., median or range) is needed, which results in a loss of information. What would really help is to process the data safely first, for example, to reduce the number of dimensions from many to two or three. Such techniques are described in this section.

    Apart from (a) reduction of dimensionality (projection pursuit), the following methods help to (b) find groups (clusters) in data, (c) discover hidden factors (latent variables) and understand variable importance (feature selection\(^{[1]}\)), (d) recognize objects (e.g., complicated shapes) within data, typically using densities and hiatus (gaps) in multidimensional space, and (e) unmix signals.

Figure \(\PageIndex{1}\) Static 3D cloud plot of iris data with several rotations.

    Classification with primary data

Primary data is what comes directly from observation and has not yet been processed in any way (processing makes it secondary data).

    Shadows of hyper clouds: PCA

RGL (see above) allows us to find the best projection manually, with a mouse. However, it is also possible to do this programmatically, with principal component analysis (PCA). It belongs to the family of non-supervised methods, methods of classification without learning, also known as ordination.

PCA treats the data as points in a virtual multidimensional space where every dimension is one character. Together, these points form a multidimensional cloud. The goal of the analysis is to find a line which crosses this cloud along its most elongated part, like a pear on a stick (Figure \(\PageIndex{2}\)). This is the first principal component. The second line is perpendicular to the first and spans the second most elongated part of the cloud. These two lines make a plane onto which every point is projected.

Figure \(\PageIndex{2}\) Principal component analysis is like a pear on a stick.

    Let us prove this practically. We will load the two-dimensional (hence only two principal components) black and white pear image and see what PCA does with it:

    Code \(\PageIndex{1}\) (R):

    library(png) # package to read PNG images
    aa <- readPNG("data/pear2d_bw.png")
    bb <- which(aa == 0, arr.ind=TRUE) # pixels to coordinates
    ## plot together original (green) and PCA-rotated:
    bbs <- scale(bb)
    pps <- scale(prcomp(bb)$x[, 1:2]) # only two PCs anyway
    xx <- range(c(bbs[, 1], pps[, 1]))
    yy <- range(c(bbs[, 2], pps[, 2]))
    plot(pps, pch=".", col=adjustcolor("black", alpha=0.5), xlim=xx, ylim=yy)
    points(bbs, pch=".", col=adjustcolor("green", alpha=0.5))
    legend("bottomright", fill=adjustcolor(c("green", "black"), alpha=0.5), legend=c("Original", "PCA-rotated"), bty="n", border=0)

PCA is related to the task of finding the “most average person”. A simple combination of averages will not work, which is well explained in Todd Rose's book “The End of Average”. However, it is usually possible to find in the hyperspace a configuration of parameters which will suit most people, and this is what PCA is for.

Figure \(\PageIndex{3}\) Shadow of the pear: how PCA projects the image.

After the PCA procedure, all columns (characters) are transformed into components; the most informative component is the first, the next is the second, then the third, and so on. The number of components is the same as the number of initial characters, but the first two or three usually include most of the necessary information. This is why they can be used for 2D visualization of multidimensional data. There are many similarities between PCA and factor analysis (which is out of the scope of this book).

First, we will use an example from the open repository with measurements of four different populations of sedges:

    Code \(\PageIndex{2}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    str(ca)
    ca.pca <- princomp(scale(ca[,-1]))

    (Function scale() standardizes all variables.)

The following plot (Figure \(\PageIndex{4}\)) is a technical screeplot which shows the relative importance of each component:

    Code \(\PageIndex{3}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    plot(ca.pca, main="")
Figure \(\PageIndex{4}\) Plot showing the importance of each component.

Here it is easy to see that among the four components (the same number as initial characters), the first two have the highest importance. There is a way to obtain the same information without plotting:

    Code \(\PageIndex{4}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    summary(ca.pca)

The first two components together explain about 84% of the total variance.
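
The same numbers can be obtained without summary(); a minimal sketch (not from the original text) which derives them from the stored standard deviations:

ca.prop <- ca.pca$sdev^2 / sum(ca.pca$sdev^2) # proportion of variance per component
round(cumsum(ca.prop), 3) # cumulative proportions; the first two components give about 0.84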

Visualization of PCA results is usually made using scores from the PCA model (Figure \(\PageIndex{5}\)):

    Code \(\PageIndex{5}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    ca.p <- ca.pca$scores[, 1:2]
    plot(ca.p, type="n", xlab="PC1", ylab="PC2")
    text(ca.p, labels=ca[, 1], col=ca[, 1])
    Hulls(ca.p, ca[, 1]) # asmisc.r
Figure \(\PageIndex{5}\) Diversity of sedges on the plot of the first two principal components.

(The last command draws hulls which help to conclude that sedges from the third population are intermediate between the first and the second; they might even be hybrids. If three, not two, components are most important, then any of the 3D plots like scatterplot3d() explained above will help to visualize them.)

It is tempting to measure the intersection between hulls. This is possible with the Overlap() function, which in turn loads the PBSmapping package:

    Code \(\PageIndex{6}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    ca.p <- ca.pca$scores[, 1:2]
    ca.h <- Hulls(ca.p, ca[, 1], plot=FALSE) # asmisc.r
    ca.o <- Overlap(ca.h) # asmisc.r
    summary(ca.o)

Sometimes it is useful to present PCA results as a biplot (Figure \(\PageIndex{6}\)):

    Code \(\PageIndex{7}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    biplot(ca.pca, xlabs=ca[, 1])
Figure \(\PageIndex{6}\) Biplot showing the load of each character on the first two components.

The biplot helps to understand visually how large the load of each initial character on the first two components is. For example, the characters of height and spike length (but not spike width) have the biggest loads on the first component, which distinguishes the populations most. Function loadings() shows this information in numerical form:

    Code \(\PageIndex{8}\) (R):

    ca <- read.table("http://ashipunov.info/shipunov/open/carex.txt", h=TRUE)
    ca.pca <- princomp(scale(ca[,-1]))
    loadings(ca.pca)

R has two variants of PCA calculation: the first (already discussed) with princomp(), and the second with prcomp(). The difference lies in the way the components are exactly calculated. The first way is traditional, but the second is currently recommended:

    Code \(\PageIndex{9}\) (R):

    iris.pca <- prcomp(iris[, 1:4], scale=TRUE)
    iris.pca$rotation
    iris.p <- iris.pca$x[, 1:2]
    plot(iris.p, type="n", xlab="PC1", ylab="PC2")
    text(iris.p, labels=abbreviate(iris[, 5], 1, method="both.sides"), col=as.numeric(iris[, 5]))
    Ellipses(iris.p[, 1:2], as.numeric(iris[, 5])) # asmisc.r

The example above shows some differences between the two PCA methods. First, prcomp() conveniently accepts a scale option. Second, loadings are taken from the rotation element. Third, scores are in the element named x. Please run the code yourself to see how to add 95% confidence ellipses to the 2D ordination plot. One can see that Iris setosa (letter “s” on the plot) is seriously divergent from the two other species, Iris versicolor (“v”) and Iris virginica (“a”).
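
To see the computational difference concretely, here is a small check (an illustration, not part of the original text): princomp() estimates variances with the divisor n, whereas prcomp() uses n - 1, so their standard deviations differ by a constant factor:

p1 <- princomp(scale(iris[, 1:4])) # eigen-decomposition of the covariance matrix, divisor n
p2 <- prcomp(iris[, 1:4], scale=TRUE) # singular value decomposition, divisor n - 1
p1$sdev / p2$sdev # constant ratio sqrt((n - 1)/n), here sqrt(149/150)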

    Packages ade4 and vegan offer many variants of PCA (Figure \(\PageIndex{7}\)):

    Code \(\PageIndex{10}\) (R):

    library(ade4)
    iris.dudi <- dudi.pca(iris[, 1:4], scannf=FALSE)
    s.class(iris.dudi$li, iris[, 5])
Figure \(\PageIndex{7}\) Diversity of irises on the plot of the first two principal components (ade4 package).

(The plot is similar to the one shown in Figure \(\PageIndex{5}\); however, the differences between groups are more clear here.)

In addition, it is possible to use an inferential approach for the PCA:

    Code \(\PageIndex{11}\) (R):

    iris.between <- bca(iris.dudi, iris[, 5], scannf=FALSE)
    randtest(iris.between)

Monte-Carlo randomization allows us to understand numerically how well the Iris species are separated with this PCA. The high observation value (72.2%, which is larger than 50%) is a sign of reliable differences.

    There are other variants of permutation tests for PCA, for example, with anosim() from the vegan package.

    Please note that principal component analysis is in general a linear technique similar to the analysis of correlations, and it can fail in some complicated cases.

    Data solitaire: SOM

There are several other techniques which allow unsupervised classification of primary data. Self-organizing maps (SOM) are a technique somewhat similar to breaking a deck of cards into several piles:

    Code \(\PageIndex{12}\) (R):

    library(kohonen)
    iris.som <- som(scale(iris[, 1:4]), grid = somgrid(3, 3, "hexagonal")) # 9 "piles"
    predict(iris.som)$unit.classif # content of each "pile"
    iris.som$distances # distance to the "center of pile"
    set.seed(1)
    plot(iris.som, main="")
    oldpar <- par(new=TRUE, oma=c(2, rep(0, 3)))
    plot(iris.som, type="mapping", col=iris$Species, main="", border=0)
    par(oldpar)
    legend("top", col=1:3, pch=1, legend=levels(iris$Species))

The resulting plot (Figure \(\PageIndex{8}\)) contains a graphical representation of the character values, together with the placement of the actual data points.

Figure \(\PageIndex{8}\) Self-organizing map for iris data. Both character values (codes) and data placement are shown.

In fact, SOM is a non-learning neural network. The more advanced Growing Neural Gas (GNG) algorithm uses ideas similar to SOM.

    Data density: t-SNE

With a really big number of samples, the t-SNE algorithm (the name stands for “t-Distributed Stochastic Neighbor Embedding”) performs better than classical PCA. t-SNE is frequently used for shape recognition. It is easy enough to employ it in R (Figure \(\PageIndex{9}\)):

    Code \(\PageIndex{13}\) (R):

    library(Rtsne)
    iris.unique <- unique(iris)
    set.seed(42)
    tsne.out <- Rtsne(as.matrix(iris.unique[, 1:4]))
    SP <- iris.unique$Species
    plot(tsne.out$Y, col=SP, pch=14+as.numeric(SP), xlab="", ylab="")
    legend("topleft", pch=14+1:nlevels(SP), col=1:nlevels(SP), legend=levels(SP))
Figure \(\PageIndex{9}\) t-SNE algorithm splits the iris data.

    Classification with correspondence

Correspondence analysis is a family of techniques similar to PCA, but applicable to categorical data (primary or in contingency tables). A simple variant of correspondence analysis is implemented in corresp() from the MASS package (Figure \(\PageIndex{10}\)), which works with contingency tables:

    Code \(\PageIndex{14}\) (R):

    HE <- margin.table(HairEyeColor, 1:2)
    HE.df <- Table2df(HE) # asmisc.r
    biplot(corresp(HE.df, nf = 2), xpd=TRUE)
    legend("topleft", fill=1:2, legend=c("hair","eyes"))

(Here we converted the “table” object HE into a data frame. xpd=TRUE was used to allow the text to go outside of the plotting box.)

Figure \(\PageIndex{10}\) Correspondence plot of a contingency table.

This example uses the HairEyeColor data from the previous chapter. The plot visualizes both parameters, so if a particular combination of colors is more frequent, then the positions of the corresponding words are closer. For example, black hair and brown eyes frequently occur together. The positions of these words are more distant from the center (designated with a cross) because the numerical values of these characters are remote.

The possibility to visualize several character sets simultaneously on one plot is an impressive feature of correspondence analysis (Figure \(\PageIndex{11}\)):

    Code \(\PageIndex{15}\) (R):

    library(vegan)
alla <- read.table("data/lakesea_abio.txt", sep="\t", h=TRUE)
allc <- read.table("data/lakesea_bio.txt", sep="\t", h=TRUE)
    all.cca <- cca(allc, alla[,-14])
    plot.all.cca <- plot(all.cca, display=c("sp","cn"), xlab="", ylab="")
    points(all.cca, display="sites", pch=ifelse(alla[, 14], 15, 0))
    legend("topleft", pch=c(15, 0), legend=c("lake","sea"))
    text(-1.6, -4.2, "Carex.lasiocarpa", pos=4)

This is much more advanced than a biplot. The data used here contained both abiotic factors (ecotopes) and biotic factors (plant species), plus the geography of some Arctic islands: whether they were lake islands or sea islands. The plot was able to arrange all of these data: abiotic factors are shown with arrows, biotic factors with pluses, and sites (the islands themselves, as characterized by the sum of all available factors, biotic and abiotic) with squares of different colors, depending on geographic origin. Every plus could be identified with the interactive identify(plot.all.cca, "species") command. We did it just for the single most outstanding species, Carex lasiocarpa (woolly-fruit sedge), which is clearly associated with lake islands, and also with swamps.

    Classification with distances

An important approach to non-supervised classification is to work with distances instead of the original data. Distance-based methods need the dissimilarities between each pair of objects to be calculated first. The advantage of these methods is that dissimilarities can be calculated from data of any type: measurement, ranked, or nominal.

    Distances

There are myriads of ways to calculate dissimilarity (or similarity, which is essentially the reverse of dissimilarity)\(^{[2]}\). One of these ways, already explained above, is a (reverse absolute) correlation. Other popular ways are Euclidean (square) distance and Manhattan (block) distance. Both of them (Figure \(\PageIndex{12}\)) are useful for measurement variables.
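
As a toy illustration (not from the original text), both distances can be computed with the standard dist() function:

pts <- rbind(A=c(0, 0), B=c(3, 4))
dist(pts) # Euclidean: sqrt(3^2 + 4^2) = 5
dist(pts, method="manhattan") # Manhattan: |3 - 0| + |4 - 0| = 7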

Figure \(\PageIndex{11}\) Canonical correspondence analysis plot showing Arctic islands (squares), species (crosses) and habitat factors (arrows).

Manhattan distances are similar to driving distances, especially when there are not many roads available. The example below uses driving distances between the biggest North Dakota towns:

    Code \(\PageIndex{16}\) (R):

nd <- read.table("data/nd.txt", h=TRUE, sep="\t", row.names=1)
    nd.d <- as.dist(nd)
    str(nd.d)
Figure \(\PageIndex{12}\) Euclidean (1) and Manhattan (2) distances between A and B.

In most cases, we need to convert raw variables into a distance matrix. The basic way is to use dist(). Note that ranked and binary variables usually require different approaches, which are implemented in the vegan (function vegdist()) and cluster (function daisy()) packages. The latter function recognizes the type of variable and applies the most appropriate metric (including the universal Gower distance); it also accepts a metric specified by the user:

    Code \(\PageIndex{17}\) (R):

    library(cluster)
    iris.dist <- daisy(iris[, 1:4], metric="manhattan")
    str(iris.dist)
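
For data with mixed types, daisy() switches to the Gower coefficient automatically; a small hypothetical example:

dd <- data.frame(size=c(1.2, 3.4, 2.2), color=factor(c("red", "green", "red")))
daisy(dd) # numeric and factor columns mixed, so the Gower metric is used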

In biology, one can use Smirnov taxonomic distances, available from the smirnov package. In the following example, we use data on plant species distribution on small islands.

The next plot is intended to help the reader understand these data better. It is simply a kind of map which shows the geographical locations and sizes of the islands:

    Code \(\PageIndex{18}\) (R):

mp <- read.table("http://ashipunov.info/shipunov/open/moldino_l.txt", h=TRUE, sep="\t", row.names=1)
    library(MASS)
    eqscplot(mp$LON, mp$LAT, cex=round(log(mp$SQUARE))-5.5, axes=FALSE, xlab="", ylab="", xpd=T)
    text(mp$LON, mp$LAT, labels=row.names(mp), pos=4, offset=1, cex=.9)

    (Please plot it yourself.)

    Now we will calculate and visualize Smirnov’s distances:

    Code \(\PageIndex{19}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    library(smirnov)
    m1.Txy <- smirnov(m1)
    m1.s <- (1 - m1.Txy) # similarity to dissimilarity
    dimnames(m1.s) <- list(row.names(m1))
    symnum(m1.s)

Smirnov’s distances have an interesting feature: instead of 0 or 1, the diagonal of the similarity matrix is filled with coefficients of uniqueness (Txx):

    Code \(\PageIndex{20}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    library(smirnov)
    m1.Txy <- smirnov(m1)
    m1.Txx <- diag(m1.Txy)
    names(m1.Txx) <- row.names(m1)
    rev(sort(round(m1.Txx, 3)))

This means that Verik island is the most unique with regard to plant species occurrence.

    Making maps: multidimensional scaling

There are many things one can do with a distance matrix. One of the most straightforward is multidimensional scaling, MDS (also known as “principal coordinate analysis”, PCoA):

    Code \(\PageIndex{21}\) (R):

nd <- read.table("data/nd.txt", h=TRUE, sep="\t", row.names=1)
nd.d <- as.dist(nd)
nd.c <- cmdscale(nd.d)
new.names <- sub("y C", "y\nC", row.names(nd))
    library(MASS)
    eqscplot(nd.c, type="n", axes=FALSE, xlab="", ylab="")
    points(nd.c, pch=19)
    text(nd.c, labels=new.names, xpd=TRUE, pos=3, cex=0.8)
Figure \(\PageIndex{13}\) It is not a map of North Dakota towns but the plot of cmdscale() output from the driving distance data.

Compare the plot (Figure \(\PageIndex{13}\)) with any geographical map. If you do not have a map of North Dakota but have these driving distances, cmdscale() allows you to re-create the map!

So in essence, MDS is a task reverse to navigation (finding driving directions from a map): it uses “driving directions” and makes a map from them.

Another, less impressive but more useful example (Figure \(\PageIndex{14}\)) uses the raw data of Fisher's irises:

    Code \(\PageIndex{22}\) (R):

    library(KernSmooth)
    library(cluster)
    iris.dist <- daisy(iris[, 1:4], metric="manhattan")
    iris.c <- cmdscale(iris.dist)
    est <- bkde2D(iris.c, bandwidth=c(.7, 1.5))
    plot(iris.c, type="n", xlab="Dim. 1", ylab="Dim. 2")
    text(iris.c, labels=abbreviate(iris[, 5], 1, method="both.sides"))
    contour(est$x1, est$x2, est$fhat, add=TRUE, drawlabels=FALSE, lty=3)
Figure \(\PageIndex{14}\) The result of multidimensional scaling of the iris data. Visualization uses density estimation.

    (There is no real difference from PCA because metric multidimensional scaling is related to principal component analysis; also, the internal structure of data is the same.)

To make the plot “prettier”, we added density lines of point closeness estimated with the bkde2D() function from the KernSmooth package. Another way to show density is to plot a 3D surface (Figure \(\PageIndex{15}\)):

    Code \(\PageIndex{23}\) (R):

library(KernSmooth)
library(cluster)
    iris.dist <- daisy(iris[, 1:4], metric="manhattan")
    iris.c <- cmdscale(iris.dist)
    est <- bkde2D(iris.c, bandwidth=c(.7, 1.5))
    persp(est$x1, est$x2, est$fhat, theta=135, phi=45, col="purple3", shade=0.75, border=NA, xlab="Dim. 1", ylab="Dim. 2", zlab="Density")
Figure \(\PageIndex{15}\) 3D density surface of multidimensionally scaled iris data.

In addition to cmdscale(), the MASS package (functions isoMDS() and sammon()) implements non-metric multidimensional scaling, and the vegan package has the advanced non-metric metaMDS(). Non-metric multidimensional scaling has no analogs to PCA loadings (importances of variables) or to the proportion of variance explained by a component, but it is possible to calculate surrogate metrics:

    Code \(\PageIndex{24}\) (R):

iris.dist2 <- dist(iris[, 1:4], method="manhattan")
iris.dist2[iris.dist2 == 0] <- abs(jitter(0)) # remove zero distances (isoMDS does not accept them)
    library(MASS)
    iris.m <- isoMDS(iris.dist2)
    cor(iris[, 1:4], iris.m$points) # MDS loadings surrogate
    MDSv(iris.m$points) # asmisc.r

Consequently (and similarly to PCA), the sepal width character influences the second dimension much more than the three other characters do. We can also guess that within this non-metric solution, the first dimension takes almost 98% of the variance.

    Making trees: hierarchical clustering

Another way to process the distance matrix is to perform hierarchical clustering, which produces dendrograms, or trees, which are “one and a half dimensional” plots (Figure \(\PageIndex{16}\)):

    Code \(\PageIndex{25}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    aa.dist <- dist(t(aa)) # because planets are columns
    aa.hclust <- hclust(aa.dist, method="ward.D")
    plot(aa.hclust, xlab="", ylab="", sub="")

Ward’s method of clustering is well known to produce sharp, well-separated clusters (this, however, might lead to false conclusions if the data has no apparent structure). The distant planets are most similar to each other (joining at a height of \(\approx25\)); the similarity between Venus and Mars is also high (their dissimilarity is \(\approx0\)). Earth stands more apart: its similarity with Mercury is lower, at a height of \(\approx100\); but since Mercury has no true atmosphere, it could be ignored.

    The following classification could be produced from this plot:

    • Earth group: Venus, Mars, Earth, [Mercury]
    • Jupiter group: Jupiter, Saturn, Uranus, Neptune

Instead of this “speculative” approach, one can use the cutree() function to produce the classification explicitly; it requires the hclust() object and the number of desired clusters:

    Code \(\PageIndex{26}\) (R):

    library(cluster)
    iris.dist <- daisy(iris[, 1:4], metric="manhattan")
    iris.hclust <- hclust(iris.dist)
    iris.3 <- cutree(iris.hclust, 3)
    head(iris.3) # show cluster numbers
    Misclass(iris.3, iris[, 5]) # asmisc.r
Figure \(\PageIndex{16}\) Dendrogram reflecting similarities between atmospheres of Solar system planets.

To check how well the selected method performs the classification, we wrote the custom function Misclass(). This function calculates the confusion matrix. Please note that Misclass() assumes the predicted and observed groups are in the same order; see also below for the fanny() function results.

A confusion matrix is a simple way to assess the predictive power of a model. A more advanced technique of the same sort is called cross-validation. As an example, the user might split the data into 10 equal parts (e.g., with cut()) and then, in turn, make each part the “unknown” whereas the rest becomes the training subset.
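
A minimal sketch of the splitting step (an illustration only; the actual classifier training and checking are left out):

folds <- cut(sample(nrow(iris)), breaks=10, labels=FALSE) # random assignment into 10 parts
for (k in 1:10) {
  unknown <- iris[folds == k, ] # this part is treated as "unknown"
  training <- iris[folds != k, ] # the rest becomes the training subset
  ## ... train the classifier on 'training', then check it on 'unknown' ...
}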

As you can see from the table, 32% of Iris virginica plants were misclassified. This can be improved if we change either the distance metric or the clustering method. For example, Ward's method of clustering gives more separated clusters and slightly better misclassification rates. Please try it yourself.

Hierarchical clustering does not return any variable importance by default. However, it is still possible to assist feature selection with a clustering heatmap (Figure \(\PageIndex{17}\)):

    Code \(\PageIndex{27}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    library(cetcolor)
    heatmap(t(aa), col=cet_pal(12, "coolwarm"), margins=c(9, 6))

(Here we also used the cetcolor package, which allows us to create perceptually uniform color palettes.)

The heatmap clusters rows and columns separately and places the result of the image() function in the center. It then becomes visible which characters influence which object clusters and vice versa. On this heatmap, for example, Mars and Venus cluster together mostly because of similar levels of carbon dioxide.

There are too many irises to plot the resulting dendrogram in the common way. One workaround is to select only some irises (see below). Another method is to use the function Ploth() (Figure \(\PageIndex{18}\)):

    Code \(\PageIndex{28}\) (R):

library(cluster)
iris.hclust <- hclust(daisy(iris[, 1:4], metric="manhattan")) # same clustering as above
Ploth(iris.hclust, col=as.numeric(iris[, 5]), pch=16, col.edges=TRUE, horiz=TRUE, leaflab="none") # asmisc.r
legend("topleft", fill=1:nlevels(iris[, 5]), legend=levels(iris[, 5]))
    
Figure \(\PageIndex{17}\) Clustering heatmap for atmosphere data.

Ploth() is also useful if one simply needs to rotate a dendrogram. Please check the following yourself:

    Code \(\PageIndex{29}\) (R):

tox <- read.table("data/poisoning.txt", h=TRUE)
tox.cor <- cor(tox, method="kendall", use="pairwise") # correlation matrix used as the basis for distances
oldpar <- par(mar=c(2, 0, 0, 4))
tox.dist <- as.dist(1 - abs(tox.cor))
Ploth(hclust(tox.dist, method="ward.D"), horiz=TRUE) # asmisc.r
    par(oldpar)

(This is also a demonstration of how to use correlation as a distance. As you will see, the same connection between Caesar salad, tomatoes and illness can be visualized with a dendrogram. Some other interesting relations are also visible.)

Figure \(\PageIndex{18}\) Hierarchical clustering of iris data.


Planet Aqua is entirely covered by shallow water. This ocean is inhabited by various flat organisms (Figure \(\PageIndex{19}\)). These creatures (we call them “kubricks”) can photosynthesize and/or eat other organisms or their parts (which match the shape of their mouths), and move (only if they have no stalks). Provide a dendrogram for the kubrick species based on the results of hierarchical clustering.

Figure \(\PageIndex{19}\) Eight species of kubricks.

    How to know the best clustering method

Hierarchical cluster analysis and its relatives (e.g., phylogeny trees) are visually appealing, but there are three important questions which need to be solved: (1) which distance is the best (this is also relevant to other distance-based methods); (2) which hierarchical clustering method is the best; and (3) how to assess the stability of clusters.

The second question is relatively easy to answer. Function Co.test(dist, tree) from asmisc.r reveals the consistency between a distance object and the hierarchical clustering. It is essentially a correlation test between the initial distances and the distances recovered from the cophenetic structure of the dendrogram.
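
Co.test() itself lives in asmisc.r, but the underlying idea can be sketched with base R functions (assuming the atmosphere example used above):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
aa.dist <- dist(t(aa))
aa.hclust <- hclust(aa.dist, method="ward.D")
cor.test(as.vector(aa.dist), as.vector(cophenetic(aa.hclust))) # initial vs. cophenetic distances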

Cophenetic distances are useful in many ways. For example, to choose the best clustering method and therefore answer the second question, one might use the cophenetic-based approach implemented in PlotBest.hclust():

    Code \(\PageIndex{30}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    library(smirnov)
    m1.Txy <- smirnov(m1)
    m1.s <- (1 - m1.Txy) # similarity to dissimilarity
    PlotBest.hclust(as.dist(m1.s)) # asmisc.r

    (Make and review this plot yourself. Which clustering is better?)

Note, however, that these “best” scores are not always best for you. For example, one might still decide to use ward.D because it makes clusters sharp and visually well separated.

To choose the best distance method, one might use a visually similar approach:

    Code \(\PageIndex{31}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    PlotBest.dist(m1) # asmisc.r

    (Again, please review the plot yourself.)

In fact, it just visualizes the correlation between multidimensional scaling of the distances and principal component analysis of the raw data. Nevertheless, it is still useful.
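
One way to compute such a score for a single distance method is a Procrustes correlation between the two ordinations (a sketch of the idea only; PlotBest.dist() may use a different exact measure):

m1.mds <- cmdscale(dist(m1, method="binary")) # MDS of one candidate distance
m1.pca <- prcomp(m1)$x[, 1:2] # PCA of the raw data
vegan::protest(m1.mds, m1.pca) # Procrustes correlation between the two configurations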

    How to compare clusterings

Hierarchical clusterings are dendrograms, and it is not easy to compare them “out of the box”. Several different methods allow us to compare two trees.

    We can employ methods associated with biological phylogenies (these trees are essentially dendrograms).

    Suppose that there are two clusterings:

    Code \(\PageIndex{32}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    aa.d1 <- hclust(dist(t(aa)))
    aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")

The ape library has the dist.topo() function which calculates the topological distance between trees, and the phangorn library calculates several such indices:

    Code \(\PageIndex{33}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    library(ape)
    aa.d1 <- hclust(dist(t(aa)))
    aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")
    aa.ph1 <- unroot(as.phylo(aa.d1)) # convert
    aa.ph2 <- unroot(as.phylo(aa.d2))
    dist.topo(aa.ph1, aa.ph2)
    phangorn::treedist(aa.ph1, aa.ph2)

The next possibility is to plot two trees side by side and show differences with lines connecting the same tips (Figure \(\PageIndex{20}\)):

    Code \(\PageIndex{34}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    library(ape)
    aa.d1 <- hclust(dist(t(aa)))
    aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")
    aa.ph1 <- unroot(as.phylo(aa.d1)) # convert
    aa.ph2 <- unroot(as.phylo(aa.d2))
    ass <- cbind(aa.ph1$tip.label, aa.ph1$tip.label)
    aa.ph2r <- rotate(aa.ph2, c("Earth", "Neptune"))
    cophyloplot(aa.ph1, aa.ph2r, assoc=ass, space=30, lty=2)

(Note that sometimes you might need to rotate a branch with the rotate() function. Rotation does not change the dendrogram.)

Figure \(\PageIndex{20}\) Side-by-side dendrogram plot for atmosphere data.

It is also possible to plot a consensus tree, which shows only those clusters which appear in both clusterings:

    Code \(\PageIndex{35}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    library(ape)
    aa.d1 <- hclust(dist(t(aa)))
    aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")
    aa.ph1 <- unroot(as.phylo(aa.d1)) # convert
    aa.ph2 <- unroot(as.phylo(aa.d2))
    aa.ph2r <- rotate(aa.ph2, c("Earth", "Neptune"))
    plot(consensus(aa.ph1, aa.ph2r))

    (Please make this plot yourself.)

A heatmap can also be used to visualize similarities between two dendrograms:

    Code \(\PageIndex{36}\) (R):

aa <- read.table("data/atmospheres.txt", h=TRUE, sep="\t", row.names=1)
    library(ape)
    aa.d1 <- hclust(dist(t(aa)))
    aa.d2 <- hclust(as.dist(1 - abs(cor(aa, method="s"))), method="ward.D")
    aa12.match <- Hclust.match(aa.d1, aa.d2) # asmisc.r
    library(cetcolor)
    cols <- cet_pal(max(aa12.match), "blues")
heatmap(aa12.match, scale="none", col=cols)

(Hclust.match() counts matches between the two dendrograms (which are based on the same data) and then heatmap() plots these counts as colors; it also supplies the consensus configuration as two identical dendrograms on the top and on the left. Please make this plot yourself.)

Both multidimensional scaling and hierarchical clustering are distance-based methods. Please make and review the following plot (from the vegan3d package) to understand how to compare them:

    Code \(\PageIndex{37}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    library(smirnov)
    m1.Txy <- smirnov(m1)
    m1.s <- (1 - m1.Txy) # similarity to dissimilarity
    m1.dist <- as.dist(m1.s)
    m1.hclust <- hclust(m1.dist)
    m1.c <- cmdscale(m1.dist)
    library(vegan3d)
    orditree3d(m1.c, m1.hclust, text=attr(m1.dist, "Labels"), type="t")

How good are the resulting clusters

There are several ways to check how good the resulting clusters are, and many of them are based on bootstrap replication (see Appendix).

Function Jclust() presents a method to bootstrap bipartitions and plot the consensus tree with support values (Figure \(\PageIndex{21}\)):

    Code \(\PageIndex{38}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    (m1.j <- Jclust(m1, 3, iter=1000))
    plot(m1.j, rect.lty=2, sub="")
Figure \(\PageIndex{21}\) Bootstrap stability of the 3-cluster solution for the lake islands data (1000 iterations).

(Note that Jclust() uses cutree() and therefore works only if it “knows” the number of desired clusters. Since the consensus result depends on the cluster number, plots with different numbers of clusters will be different.)

Another way is to use the pvclust package, which is able to calculate the support for clusters via bootstrap (Figure \(\PageIndex{22}\)):

    Code \(\PageIndex{39}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    library(pvclust)
    m1.pvclust <- pvclust(t(m1), method.dist="manhattan", nboot=100, parallel=TRUE)
    plot(m1.pvclust, col.pv=c("darkgreen", 0, 0), main="")
Figure \(\PageIndex{22}\) Dendrogram with support values (pvclust package).

(Function pvclust() clusters columns, not rows, so we had to transpose the data again. On the plot, numerical values of cluster stability (au) are located above each node. The closer these values are to 100, the better.)

There is also the BootA() function in the asmisc.r set, which allows us to bootstrap clustering with methods from the phylogenetic package ape:

    Code \(\PageIndex{40}\) (R):

mo <- read.table("http://ashipunov.info/shipunov/open/moldino.txt", h=TRUE, sep="\t", row.names=1)
    m1 <- t((mo > 0) * 1) # convert to occurrence 0/1 data and transpose
    m1.ba <- BootA(m1, FUN=function(.x) # asmisc.r
    as.phylo(hclust(dist(.x, method="minkowski"), method="average")), iter=100, mc.cores=4)
    plot(m1.ba$boot.tree, show.node.label=TRUE)
    plot(m1.ba$cons.tree) # majority-rule consensus

(This method requires an anonymous function which applies the methods you want. It plots both the consensus tree (without support values) and the original tree with support values. Please make these trees. Note that by default, only support values greater than 50% are shown.)

    Making groups: k-means and friends

Apart from hierarchical clustering, there are many other ways of clustering. Typically, they do not return any ordination (“map”) and provide only cluster membership. For example, k-means clustering tries to obtain the a priori specified number of clusters from the raw data (it does not need a distance matrix to be supplied):

    Code \(\PageIndex{41}\) (R):

    eq <- read.table("data/eq.txt", h=TRUE)
    eq.k <- kmeans(eq[,-1], 2)

    K-means clustering does not plot trees; instead, for every object it returns the number of its cluster:

    Code \(\PageIndex{42}\) (R):

    eq <- read.table("data/eq.txt", h=TRUE)
    eq.k <- kmeans(eq[,-1], 2)
    str(eq.k$cluster)
    Misclass(eq.k$cluster, eq$SPECIES) # asmisc.r

    (As you see, misclassification errors are low.)

Instead of the a priori cluster number, the kmeans() function also accepts a set of initial cluster centers (for example, selected rows of the data).
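
For example (an illustrative choice of rows, one from each iris species; not from the original text):

iris.k3 <- kmeans(iris[, 1:4], centers=iris[c(1, 51, 101), 1:4]) # selected rows used as starting centers
table(iris.k3$cluster, iris$Species)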

Spectral clustering from the kernlab package is a superficially similar method which is capable of separating really tangled elements:

    Code \(\PageIndex{43}\) (R):

    library(kernlab)
    data(spirals)
    set.seed(1)
    sc <- specc(spirals, centers=2)
    plot(spirals, col=sc, xlab="", ylab="")
Figure \(\PageIndex{23}\) Kernel-based spectral clustering is capable of separating the two spirals.

Kernel methods (like spectral clustering) recalculate the primary data to make it more suitable for the analysis. Support vector machines (SVM, see below) are another example. There is also kernel PCA (function kpca() in the kernlab package).
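
A small kernel PCA sketch (the rbfdot kernel and the sigma value are arbitrary choices, not from the original text):

library(kernlab)
iris.kpca <- kpca(as.matrix(iris[, 1:4]), kernel="rbfdot", kpar=list(sigma=0.2), features=2)
plot(rotated(iris.kpca), col=as.numeric(iris$Species), xlab="1st kernel PC", ylab="2nd kernel PC")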

The next group of clustering methods is based on fuzzy logic and takes into account the fuzziness of relations. There is always the possibility that a particular object classified into cluster A actually belongs to a different cluster B, and fuzzy clustering tries to measure this possibility:

    Code \(\PageIndex{44}\) (R):

    library(cluster)
    iris.f <- fanny(iris[, 1:4], 3)
    head(data.frame(sp=iris[, 5], iris.f$membership))

The textual part of the fanny() output is the most interesting. Every row contains multiple membership values which represent the probability of the object belonging to each particular cluster. For example, the sixth plant most likely belongs to the first cluster, but there is also a visible attraction to the third cluster. In addition, fanny() can round memberships and produce hard clustering like other cluster methods:

    Code \(\PageIndex{45}\) (R):

    Misclass(iris.f$clustering, factor(iris[, 5], levels=c("setosa", "virginica", "versicolor"))) # asmisc.r

    (We had to re-level the Species variable because fanny() gives number 2 to the Iris virginica cluster.)

    How to know cluster numbers

All “k-means and friends” methods want to know the number of clusters before they start. So how can one know a priori how many clusters are present in the data? This question is one of the most important in clustering, both practically and theoretically.

Visual analysis of the banner plot (invented by Kaufman & Rousseeuw, 1990) can help to predict this number (Figure \(\PageIndex{24}\)):

    Code \(\PageIndex{46}\) (R):

    eq <- read.table("data/eq.txt", h=TRUE)
    library(cluster)
    eq.a <- agnes(eq[,-1])
    plot(eq.a, which=1, col=c(0, "maroon"))
Figure \(\PageIndex{24}\) Banner plot. White bars suggest possible cluster partitions.

White bars on the left represent unclustered data; maroon lines on the right show the heights of possible clusters. Therefore, two clusters is the most natural solution, and four clusters would be the next possible option.

Model-based clustering allows us to determine both how many clusters are present in the data and the cluster membership. The method assumes that clusters have a particular nature and multidimensional shape:

    Code \(\PageIndex{47}\) (R):

    library(mclust)
    iris.mclust <- Mclust(iris[, -5])
    summary(iris.mclust) # 2 clusters
    iris.mclust$classification

(As you see, it reveals only two clusters. This is explainable because in the iris data two species are much more similar to each other than to the third one.)

DBSCAN is a powerful algorithm for big data (like raster images, which consist of billions of pixels), and there is an R package with the same name (in lowercase). DBSCAN reveals how many clusters are in the data at a particular resolution:

    Code \(\PageIndex{48}\) (R):

library(dbscan)
iris.p <- prcomp(iris[, 1:4], scale=TRUE)$x[, 1:2] # PCA scores used for plotting (as above)
    kNNdistplot(iris[, -5], k = 5)
    abline(h=.5, col = "red", lty=2)
    (iris.dbscan <- dbscan(iris[, -5], eps = .5, minPts = 5))
    plot(iris.p, type="n", xlab="", ylab="")
    text(iris.p, labels=abbreviate(iris[, 5], 1, method="both.sides"), col=iris.dbscan$cluster+1)

(Plots are not shown, please make them yourself. The first plot helps to find the size of the neighborhood (look at the knee). The second illustrates the results. Similar to model-based clustering, DBSCAN by default reveals only two clusters in the iris data.)

Note that while DBSCAN was not able to recover all three species, it recovered the point clouds, and it also places marginal points into the “noise” group. DBSCAN, as you see, is useful for smoothing, an important part of image recognition. The eps parameter allows us to change the “resolution” of clustering and to find more, or fewer, clusters. DBSCAN is related to t-SNE (see above) and to supervised methods based on proximity (like kNN, see below). It can also be supervised itself and predict clusters for new points. Note that k-means and DBSCAN are based on specifically calculated proximities, not directly on distances.
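
For instance, one can re-run the analysis with different eps values to see how the resolution changes (the values below are arbitrary):

library(dbscan)
dbscan(iris[, -5], eps=0.3, minPts=5) # smaller eps: finer resolution, typically more clusters and more "noise"
dbscan(iris[, -5], eps=1.0, minPts=5) # larger eps: coarser resolution, clusters tend to merge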

The stars data contains information about the 50 brightest stars in the night sky, their locations and constellations. Please use DBSCAN to make artificial constellations on the basis of star proximity. How are they related to the real constellations?

Note that the locations (right ascension and declination) are given in hours or degrees (sexagesimal system); they must be converted into decimal values.
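
A hypothetical pair of converters for this kind of recalculation (they are not part of the stars data file):

hms2deg <- function(h, m, s) (h + m/60 + s/3600) * 15 # right ascension: 1 hour = 15 degrees
dms2deg <- function(d, m, s) sign(d) * (abs(d) + m/60 + s/3600) # declination to decimal degrees
hms2deg(6, 45, 9) # e.g., right ascension of Sirius, about 101.3 degrees
dms2deg(-16, 42, 58) # declination of Sirius, about -16.7 degrees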

The “mean-shift” method searches for modes within the data which, in essence, is similar to finding proximities. The core mean-shift algorithm is slow, so the approximate “blurring” version is typically preferable:

    Code \(\PageIndex{49}\) (R):

    library(MeanShift)
    bandwidth <- quantile(dist(iris[, -5]), 0.25)
    (bmsClustering(t(iris[, -5]), h=bandwidth))

Another approach to finding the number of clusters is similar to the PCA screeplot:

    Code \(\PageIndex{50}\) (R):

    eq <- read.table("data/eq.txt", h=TRUE)
    library(cluster)
    eq.a <- agnes(eq[,-1])
    wss <- (nrow(eq[,-1]) - 1) * sum(apply(eq[,-1], 2, var))
    for (i in 2:15) wss[i] <- sum(kmeans(eq[,-1], centers=i)$withinss)
    barplot(wss, names.arg=1:15, xlab="Number of clusters", main="Sums of squares within groups", yaxt="n", ylab="")

(Please check this plot yourself. As with the banner plot, it is visible that the highest relative “cliffs” appear after 1 and 4 clusters.)

The asmisc.r collection contains the function Peaks() which helps to find local maxima in a simple data sequence. The number of these peaks on a histogram (with a sensible number of breaks) should point to the number of clusters:

    Code \(\PageIndex{51}\) (R):

    histdata <- hist(apply(scale(iris[, -5]), 1, function(.x) sum(abs(.x))), breaks=10, plot=FALSE)
sum(Peaks(histdata$counts)) # asmisc.r
[1] 3

(“Three” is the first number of peaks after “one”, and it does not change when 8 < breaks < 22.)

Finally, the integrative package NbClust allows the use of diverse methods to assess the putative number of clusters:

    Code \(\PageIndex{52}\) (R):

    library(NbClust)
    iris.nb <- NbClust(iris[, -5], method="ward.D") # wait!

    How to compare different ordinations

Most classification methods result in some ordination, a 2D plot which includes all data points. This allows us to compare them with Procrustes analysis (see Appendix for more details), which rotates and scales one data matrix to make it maximally similar to the second (target) one. Let us compare the results of classic PCA and t-SNE:

    Code \(\PageIndex{53}\) (R):

library(Rtsne)
iris.unique <- unique(iris)
set.seed(42)
tsne.out <- Rtsne(as.matrix(iris.unique[, 1:4])) # t-SNE ordination (as above)
irisu.pca <- prcomp(iris.unique[, 1:4], scale=TRUE)
    irisu.p <- irisu.pca$x[, 1:2]
    library(vegan)
    irisu.pr <- procrustes(irisu.p, tsne.out$Y)
    plot(irisu.pr, ar.col=iris.unique$Species, xlab="", ylab="", main="") # arrows point to target (PCA)
    with(iris.unique, legend("topright", pch="↑", col=1:nlevels(Species), legend=levels(Species), bty="n"))
    legend("bottomright", lty=2:1, legend=c("PCA", "t-SNE"), bty="n")

The resulting plot (Figure 7.3.1) shows how densely t-SNE packs the points and how PCA spreads them. Which of the methods makes the better grouping? Find out yourself.

    References

1. The Boruta package is especially good for all-relevant feature selection.


    This page titled 1.6.8.2: Classification without learning is shared under a Public Domain license and was authored, remixed, and/or curated by Alexey Shipunov.
