The k-modes clustering algorithm has been widely used to cluster categorical data. In this paper, we first analyze the k-modes algorithm and its dissimilarity measure. Based on this analysis, we then propose a novel dissimilarity measure, named GRD. GRD considers not only the relationships between an object and all cluster modes but also the differences among attributes.
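For context, a minimal sketch of the standard simple-matching dissimilarity that k-modes uses for categorical objects is shown below; this is the baseline measure being analyzed, not the proposed GRD (whose definition is given later in the paper).

```r
# Simple-matching dissimilarity between two categorical objects: the number
# of attributes on which they disagree. This is the baseline k-modes measure,
# not the proposed GRD.
simple_matching <- function(x, y) {
  stopifnot(length(x) == length(y))
  sum(x != y)
}

simple_matching(c("red", "small", "round"), c("red", "large", "round"))  # 1
```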
The performance of the dissimilarity matrix algorithm was evaluated on multiple hardware platforms, as reported in Tables 1 and 2. CPU-based tests were performed on a compute node containing a dual-socket Intel Xeon E5-2687W v3 at 3.5 GHz with 512 GB of RAM. The GPU tests were run on the same class of system paired with either an NVIDIA Quadro M6000 (24 GB model) or an NVIDIA Tesla K80.
Compared to the k-means approach in kmeans, the function pam has the following features: (a) it also accepts a dissimilarity matrix; (b) it is more robust because it minimizes a sum of dissimilarities instead of a sum of squared Euclidean distances; (c) it provides a novel graphical display, the silhouette plot (see plot.partition); and (d) it allows one to select the number of clusters using the mean silhouette width.
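As a hedged illustration of these features, the following sketch runs pam on a precomputed dissimilarity matrix and reports the mean silhouette width (the iris data and k = 3 are arbitrary choices for the example):

```r
library(cluster)

d  <- dist(iris[, 1:4])            # precomputed dissimilarity matrix
pr <- pam(d, k = 3, diss = TRUE)   # pam accepts the dissimilarities directly

# Mean silhouette width; comparing this value across candidate k is one way
# to select the number of clusters.
mean(silhouette(pr)[, "sil_width"])

plot(pr)  # silhouette plot via plot.partition
```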
K-Medoids. The k-medoids algorithm (Kaufman, L., Rousseeuw, P., 1987) is a clustering algorithm related to the k-means algorithm and the medoid shift algorithm. Both the k-means and k-medoids algorithms are partitional, and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. Unlike k-means, however, k-medoids chooses actual data points as cluster centers (medoids), so it can operate on an arbitrary dissimilarity matrix rather than only on Euclidean distances.
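To make the relationship concrete, here is a naive, hedged sketch of the k-medoids idea (alternating assignment and medoid-update steps); it is not the PAM BUILD/SWAP procedure, and naive_kmedoids is an illustrative name:

```r
# Naive k-medoids: assign each point to its nearest medoid, then move each
# medoid to the cluster member with the smallest total within-cluster
# dissimilarity. `d` is a symmetric dissimilarity matrix.
naive_kmedoids <- function(d, k, iter.max = 20) {
  d <- as.matrix(d)
  medoids <- sample(nrow(d), k)  # random initial medoids
  for (it in seq_len(iter.max)) {
    cl <- apply(d[, medoids, drop = FALSE], 1, which.min)  # nearest medoid
    new_medoids <- vapply(seq_len(k), function(j) {
      members <- which(cl == j)
      if (length(members) == 0) return(medoids[j])  # keep medoid of empty cluster
      members[which.min(colSums(d[members, members, drop = FALSE]))]
    }, integer(1))
    if (setequal(new_medoids, medoids)) break  # converged
    medoids <- new_medoids
  }
  list(medoids = medoids, clustering = cl)
}
```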
In my case I constructed my own dissimilarity matrix via ks.boot in R to calculate p-values for all my studies, so 10 studies generate a 10x10 matrix of p-values. I then subtracted each p-value from 1 and converted this matrix into a distance object. I used hclust to cluster my data with the single, average, and complete linkage methods.
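A hedged sketch of that workflow, assuming the studies' raw observations are stored in a list `studies` of numeric vectors (nboots = 1000 is an arbitrary choice):

```r
library(Matching)  # provides ks.boot

n <- length(studies)
p <- matrix(1, n, n)
for (i in seq_len(n - 1)) {
  for (j in (i + 1):n) {
    b <- ks.boot(studies[[i]], studies[[j]], nboots = 1000)
    p[i, j] <- p[j, i] <- b$ks.boot.pvalue
  }
}

d <- as.dist(1 - p)  # similar distributions (high p-value) -> small distance
for (m in c("single", "average", "complete")) {
  plot(hclust(d, method = m), main = paste(m, "linkage"))
}
```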
The function computes dissimilarity indices that are useful for, or popular with, community ecologists. All indices use quantitative data, although they are named after the corresponding binary index; the binary version can be calculated by setting the appropriate argument (binary = TRUE). If you do not find your favourite index here, you can see whether it can be implemented using designdist.
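For instance (a hedged example using the varespec community data shipped with vegan):

```r
library(vegan)
data(varespec)  # sites-by-species abundance matrix

d_quant  <- vegdist(varespec, method = "bray")                 # quantitative Bray-Curtis
d_binary <- vegdist(varespec, method = "bray", binary = TRUE)  # binary (Sorensen) form

# designdist builds an index from its defining formula if vegdist lacks it;
# here, the binary Jaccard index.
d_jac <- designdist(varespec, "(A+B-2*J)/(A+B-J)", terms = "binary")
```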
These techniques can be seen as the simultaneous version of two procedures based on the sequential application of the k-means and Tucker2 algorithms, and vice versa. The two techniques are T3Clus and 3Fk-means.
Standard clustering approaches, such as hierarchical clustering and k-means clustering, are applied to group the subjects after the dissimilarity matrix correction. If the true classification is known, the adjusted Rand index (ARI) can be used to check the agreement between the predicted and the true classification (Hubert and Arabie, 1985).
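As a hedged sketch, assuming `d` holds the corrected dissimilarities as a dist object and `truth` the known labels (adjustedRandIndex is from the mclust package):

```r
library(mclust)  # provides adjustedRandIndex

hc   <- hclust(d, method = "average")
pred <- cutree(hc, k = length(unique(truth)))

# ARI: 1 means perfect agreement with the true classification,
# values near 0 mean agreement no better than chance.
adjustedRandIndex(pred, truth)
```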