Clusters represent subgroups within the data that share similar
patterns. Such patterns may reflect similar temporal dynamics (when we
are analyzing sequence data, for example) or relationships between
variables (as is the case in psychological networks). Units within the
same cluster are more similar to each other, while units in different
clusters differ more substantially. In this vignette, we demonstrate how
to perform clustering on sequence data using Nestimate.
To illustrate clustering, we will use the human_long
dataset, which contains 10,796 coded human interactions from 429
human-AI pair programming sessions across 34 projects. Each row
represents a single interaction event with a timestamp, session
identifier, and action label.
library(Nestimate)
data("human_long")
# Subsample for vignette speed (CRAN build-time limit)
set.seed(1)
keep <- sample(unique(human_long$session_id), 80)
human_sub <- human_long[human_long$session_id %in% keep, ]
head(human_sub)
#> message_id project session_id timestamp session_date code
#> 395 2902 Project_7 0605767ae57f 1772229600 2026-02-28 Specify
#> 396 2902 Project_7 0605767ae57f 1772229600 2026-02-28 Command
#> 397 2902 Project_7 0605767ae57f 1772229600 2026-02-28 Request
#> 398 2902 Project_7 0605767ae57f 1772229600 2026-02-28 Specify
#> 399 2903 Project_7 0605767ae57f 1772229600 2026-02-28 Interrupt
#> 400 2905 Project_7 0605767ae57f 1772229600 2026-02-28 Command
#> cluster code_order order_in_session
#> 395 Directive 1 1
#> 396 Directive 2 2
#> 397 Directive 3 3
#> 398 Directive 4 4
#> 399 Metacognitive 1 5
#> 400 Directive 1 8
We can build a transition network from this dataset with
build_network. We need to specify the actor (session_id), the action
(cluster), and the time (timestamp). We will use the resulting
network object as the starting point for finding subgroups, since it
structures the raw data into the appropriate units of analysis for
clustering.
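The exact call is sketched below; the actor, action, and time
argument names are inferred from the description above, so verify
them against ?build_network.
# Build the overall transition network (argument names assumed from
# the text above)
net <- build_network(human_sub,
                     actor  = "session_id",
                     action = "cluster",
                     time   = "timestamp")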
Dissimilarity-based clustering groups units of analysis (in our case,
sessions, since that is what we provided as actor) by
directly comparing their observed sequences. Each session is
represented by its sequence of actions, and similarity between
sessions is defined using a distance metric that quantifies how
different two sequences are.
To implement this method using Nestimate, we can use the
build_clusters() function, which takes either raw sequence
data or a network object such as the net object that we
estimated (which also contains the original sequences in
$data):
clust <- build_clusters(net, k = 3)
clust
#> Sequence Clustering
#> Method: pam
#> Dissimilarity: hamming
#> Clusters: 3
#> Silhouette: 0.3902
#> Cluster sizes: 59, 38, 1
#> Medoids: 15, 9, 41
The default clustering mechanism uses Hamming distance (number of positions where sequences differ) with PAM (Partitioning Around Medoids).
The result contains the cluster assignments (which
cluster each session belongs to), the cluster sizes, and a
silhouette score that reflects the quality of the
clustering (higher values indicate better separation between clusters),
among other useful information.
# Cluster assignments (first 20 sessions)
head(clust$assignments, 20)
#> [1] 1 1 2 2 2 1 1 1 2 2 2 1 1 1 1 2 1 2 1 1
# Cluster sizes
clust$sizes
#> 1 2 3
#> 59 38 1
# Silhouette score (clustering quality: higher is better)
clust$silhouette
#> [1] 0.3901787
The silhouette plot shows how well each sequence fits its assigned cluster. Values near 1 indicate good fit; values near 0 suggest the sequence is between clusters; negative values indicate possible misclassification.
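The plot can be reproduced outside the package with the cluster
package; the sketch below assumes the result stores its distance
matrix in a $distances slot, which is an assumed name rather than
documented Nestimate API.
library(cluster)
# Silhouette widths from the assignments and the (assumed) stored
# distance matrix; clust$distances is not confirmed API
sil <- silhouette(clust$assignments,
                  dmatrix = as.matrix(clust$distances))
plot(sil, main = "Silhouette plot")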
The MDS (multidimensional scaling) plot projects the distance matrix to
2D, showing cluster separation.
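A comparable projection can be computed with base R's cmdscale(),
again assuming the (hypothetical) clust$distances slot holds the
distance matrix:
# Classical MDS of the sequence distances (slot name assumed)
xy <- cmdscale(as.dist(clust$distances), k = 2)
plot(xy, col = clust$assignments, pch = 19,
     xlab = "Dimension 1", ylab = "Dimension 2",
     main = "MDS projection of sequence distances")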
A distance metric defines how (dis)similarity between sequences is
measured. In other words, it quantifies how different two sequences are
from each other. Nestimate currently supports 9 distance
metrics for comparing sequences:
| Metric | Description | Best for |
|---|---|---|
| hamming | Positions where sequences differ | Equal-length sequences |
| lv | Levenshtein (edit distance) | Variable-length, insertions/deletions |
| osa | Optimal string alignment | Edit distance + transpositions |
| dl | Damerau-Levenshtein | Full edit + adjacent transpositions |
| lcs | Longest common subsequence | Preserving order, ignoring gaps |
| qgram | Q-gram frequency difference | Pattern-based similarity |
| cosine | Cosine of q-gram vectors | Normalized pattern similarity |
| jaccard | Jaccard index of q-grams | Set-based pattern overlap |
| jw | Jaro-Winkler | Short strings, typo detection |
Different metrics may produce different clustering results, so choose a metric that matches your research question and the structure of your sequences.
We can specify which distance metric we want to use through the
dissimilarity argument:
# Levenshtein distance (allows insertions/deletions)
clust_lv <- build_clusters(net, k = 3, dissimilarity = "lv")
clust_lv$silhouette
#> [1] 0.5277661
# Longest common subsequence
clust_lcs <- build_clusters(net, k = 3, dissimilarity = "lcs")
clust_lcs$silhouette
#> [1] 0.227412
Some distance metrics accept additional arguments. For example, the Hamming distance accepts temporal weighting to emphasize earlier or later positions:
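A sketch of such a call is shown below; the weights argument name and
its accepted values are assumptions here, not confirmed Nestimate
API, so consult ?build_clusters for the actual interface.
# Hypothetical: emphasize later positions when comparing sequences
# (the weights argument is assumed, not confirmed)
clust_hw <- build_clusters(net, k = 3, dissimilarity = "hamming",
                           weights = "linear")
clust_hw$silhouette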
By default, Nestimate uses PAM (Partitioning Around Medoids) to form
clusters, which assigns each sequence to the cluster represented by the
most central sequence (the medoid). Besides PAM, Nestimate
supports hierarchical clustering methods, which build clusters step by
step by progressively merging similar units into a tree-like structure
(a dendrogram):
- ward.D2 (“Ward’s Method, Squared Distances”): Minimizes the increase in within-cluster variance using squared distances. Typically produces compact, well-separated clusters.
- ward.D (“Ward’s Method”): An alternative implementation of Ward’s approach using a different distance formulation. Similar behavior, but results may vary slightly.
- complete (“Complete Linkage”): Defines the distance between clusters as the maximum distance between their members. Produces tight, compact clusters.
- average (“Average Linkage”): Uses the average distance between all pairs of points across clusters. Provides a balance between compactness and flexibility.
- single (“Single Linkage”): Uses the minimum distance between points in two clusters. Can capture chain-like structures but may lead to loosely connected clusters.
- mcquitty (“McQuitty’s Method” / “WPGMA”): A weighted version of average linkage that gives equal weight to clusters regardless of size.
- centroid (“Centroid Linkage”): Defines cluster distance based on the distance between cluster centroids (means). Can produce intuitive groupings but may introduce inconsistencies in the hierarchy.

To use any of these methods instead of PAM, we provide the
method argument to build_clusters, as shown below.
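For example, a hierarchical solution with Ward’s method:
# Hierarchical clustering with Ward's method (squared distances)
clust_ward <- build_clusters(net, k = 2, method = "ward.D2")
clust_ward$silhouette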
To choose the right clustering solution and method, we compare silhouette scores across different values of k and clustering methods (and, if desired, distance metrics):
methods <- c("pam", "ward.D2", "complete", "average")
silhouettes <- lapply(methods, function(m) {
sapply(2:4, function(k) {
build_clusters(net, k = k, method = m, seed = 42)$silhouette
})
})
names(silhouettes) <- methods
silhouettes
#> $pam
#> [1] 0.3802219 0.3901787 0.3375285
#>
#> $ward.D2
#> [1] 0.8221011 0.4119890 0.4202447
#>
#> $complete
#> [1] 0.8221011 0.6411682 0.6190995
#>
#> $average
#> [1] 0.8221011 0.6411682 0.6190995
methods <- names(silhouettes)
colors <- rainbow(length(methods))
plot(2:4, silhouettes[[1]], type = "b", pch = 19, col = colors[1],
xlab = "Number of clusters (k)",
ylab = "Average silhouette width",
ylim = c(0, 1),
main = "Choosing k")
for (i in 2:length(methods)) {
lines(2:4, silhouettes[[i]], type = "b", pch = 19, col = colors[i])
}
legend("topright", legend = methods, col = colors, lty = 1, pch = 19)
Higher silhouette scores indicate better-defined clusters. Look for
an “elbow” or maximum. Here we select ward.D2 with 2
clusters, which yields a reasonable silhouette width.
Instead of clustering sequences based on how similar they are to one another, we can cluster them based on their transition dynamics. Mixture Markov models (MMM) fit a separate Markov model for each cluster, and sequences are assigned to the cluster whose transition structure best matches their observed behavior.
To implement MMM, we can use the build_mmm() function provided by
Nestimate, passing the sequence data or the estimated network and
the number of clusters (k, 2 by default).
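The summary() output below refers to an mmm_default object, which
presumably comes from a call like this, using the defaults just
described:
# Fit a mixture Markov model with the default two clusters
mmm_default <- build_mmm(net, k = 2)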
We can inspect the results using summary() and obtain the cluster
assignments via mmm_default$assignments.
summary(mmm_default)
#> Mixed Markov Model
#> k = 2 | 98 sequences | 3 states
#> LL = -1540.6 | BIC = 3159.1 | ICL = 3195.2
#>
#> Cluster Size Mix% AvePP
#> ------------------------------
#> 1 37 39.1% 0.819
#> 2 61 60.9% 0.868
#>
#> Overall AvePP = 0.850 | Entropy = 0.448 | Class.Err = 0.0%
#>
#> --- Cluster 1 (39.1%, n=37) ---
#> Directive Evaluative Metacognitive
#> Directive 0.685 0.089 0.226
#> Evaluative 0.539 0.120 0.342
#> Metacognitive 0.502 0.184 0.314
#>
#> --- Cluster 2 (60.9%, n=61) ---
#> Directive Evaluative Metacognitive
#> Directive 0.561 0.294 0.145
#> Evaluative 0.449 0.473 0.077
#> Metacognitive 0.435 0.413 0.153
head(mmm_default$assignments, 10)
#> [1] 2 2 1 2 1 2 2 2 2 2
Once sequences are clustered, we can create separate networks by
cluster. We need to pass the clustering result to
build_network and use the group argument to
indicate that we want to group by cluster assignment.
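A sketch of that call follows; whether group accepts the per-session
assignment vector directly, or the clustering object itself, is an
assumption to verify against ?build_network.
# Per-cluster transition networks; the exact form expected by the
# group argument is assumed here
nets_by_cluster <- build_network(human_sub,
                                 actor  = "session_id",
                                 action = "cluster",
                                 time   = "timestamp",
                                 group  = mmm_default$assignments)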
We may also compare which transition probabilities differ significantly among clusters using permutation testing:
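The package call for this step is not shown here; as an illustration
of the idea, the base-R sketch below permutes cluster labels for a
single transition (Directive to Evaluative). It assumes that
mmm_default$assignments follows the sorted session order produced by
split(), and that rows are in temporal order within each session.
# Manual permutation test sketch for one transition probability
seqs <- split(as.character(human_sub$cluster), human_sub$session_id)
grp  <- mmm_default$assignments  # assumed to match names(seqs) order

# Pooled P(to | from) over a set of sequences
trans_prob <- function(s, from = "Directive", to = "Evaluative") {
  steps <- do.call(rbind, lapply(s, function(x) {
    if (length(x) < 2) NULL else cbind(head(x, -1), tail(x, -1))
  }))
  mean(steps[steps[, 1] == from, 2] == to)
}

# Observed between-cluster difference
obs <- trans_prob(seqs[grp == 1]) - trans_prob(seqs[grp == 2])

# Null distribution: reshuffle cluster labels across sessions
set.seed(42)
null_diffs <- replicate(999, {
  g <- sample(grp)
  trans_prob(seqs[g == 1]) - trans_prob(seqs[g == 2])
})
mean(abs(null_diffs) >= abs(obs))  # two-sided permutation p-value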