This vignette is a comprehensive tutorial on how to test the stability and sensitivity of MCDA methods.
# Required Packages
The most recent version of the mcdabench package is installed from CRAN with the following command:
install.packages("mcdabench", dep=TRUE)
If you have already installed mcdabench, you can load it into the R working environment with the following command:
library(mcdabench)
As an example, the egrids dataset in the package contains simulated data representing different energy management strategies or system configurations for optimizing smart grids. The dataset includes 12 alternatives and 10 criteria, which evaluate smart grids in terms of efficiency, reliability, environmental compatibility, and cost-effectiveness.
# Load the data set
data(egrids)
# Extract the decision matrix, benefit-cost vector and weights
dmat <- egrids$dmat
bc <- egrids$bcvec
userwei <- egrids$weights
print(egrids)
## $dmat
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## G1 85 92 88 75 0.0050 120 98 0.30 95 1.20
## G2 80 90 85 78 0.0070 115 97 0.40 93 1.50
## G3 82 88 87 70 0.0040 110 95 0.35 96 1.10
## G4 78 85 82 80 0.0060 125 99 0.25 94 1.40
## G5 90 95 92 74 0.0055 118 96 0.33 97 1.30
## G6 88 91 89 76 0.0062 112 94 0.28 92 1.40
## G7 81 89 83 79 0.0071 130 100 0.22 98 1.60
## G8 76 83 80 77 0.0065 127 98 0.29 91 1.35
## G9 89 94 90 73 0.0058 122 97 0.27 90 1.25
## G10 87 90 88 72 0.0049 108 93 0.32 89 1.18
## G11 79 84 81 75 0.0067 117 96 0.31 95 1.50
## G12 77 86 79 71 0.0053 105 91 0.26 88 1.22
##
## $bcvec
## [1] 1 1 1 1 -1 -1 -1 -1 -1 -1
##
## $weights
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## 0.15 0.12 0.10 0.08 0.07 0.13 0.10 0.08 0.12 0.05
In the following code snippet, the decision matrix is first normalized, and the criterion weights are then calculated by applying various weighting methods:
nmat <- calcnormal(dmat, bc, type="vector")
critwei <- calcweights(dmatrix=nmat, bcvec=bc, type="critic")
entwei <- calcweights(dmatrix=nmat, bcvec=bc, type="entropy")
equwei <- calcweights(dmatrix=nmat, bcvec=bc, type="equal")
giniwei <- calcweights(dmatrix=nmat,bcvec=bc, type="gini")
sdevwei <- calcweights(dmatrix=nmat, bcvec=bc, type="sdev")
merecwei <- calcweights(dmatrix=nmat, bcvec=bc, type="merec")
mpsiwei <- calcweights(dmatrix=nmat, bcvec=bc, type="mpsi")
geomwei <- calcweights(dmatrix=nmat, bcvec=bc, type="geom")
rocwei <- calcweights(dmatrix=nmat, bcvec=bc, type="roc")
rswei <- calcweights(dmatrix=nmat, bcvec=bc, type="rs")
wmatrix <- cbind(Equal=equwei, Merec=merecwei, Geometric=geomwei, Gini=giniwei,
Critic=critwei, Mpsi=mpsiwei, Entropy=entwei, StdDev=sdevwei, Rs=rswei, Roc=rocwei )
print(round(wmatrix,3))
## Equal Merec Geometric Gini Critic Mpsi Entropy StdDev Rs Roc
## C1 0.1 0.098 0.086 0.142 0.066 0.128 0.168 0.079 0.182 0.341
## C2 0.1 0.099 0.094 0.103 0.051 0.106 0.089 0.057 0.164 0.171
## C3 0.1 0.099 0.091 0.119 0.057 0.111 0.119 0.066 0.145 0.114
## C4 0.1 0.100 0.098 0.101 0.090 0.103 0.084 0.056 0.127 0.085
## C5 0.1 0.103 0.125 0.152 0.180 0.094 0.192 0.210 0.109 0.068
## C6 0.1 0.100 0.098 0.064 0.079 0.100 0.034 0.088 0.091 0.057
## C7 0.1 0.102 0.122 0.026 0.032 0.085 0.006 0.036 0.073 0.049
## C8 0.1 0.099 0.091 0.151 0.273 0.074 0.200 0.212 0.055 0.043
## C9 0.1 0.100 0.101 0.033 0.046 0.106 0.009 0.046 0.036 0.038
## C10 0.1 0.099 0.094 0.109 0.125 0.094 0.099 0.150 0.018 0.034
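The logic of one of these weighting schemes can be sketched in a few lines of base R. The function below is an illustrative reimplementation of entropy weighting in its standard textbook form (column proportions, normalized Shannon entropy), not mcdabench's internal code; criteria whose values discriminate more between the alternatives receive higher weight.

```r
# Illustrative entropy-weighting sketch in base R (an assumption about the
# method's standard form, not mcdabench's internal code)
entropy_weights <- function(dmat) {
  m <- nrow(dmat)
  p <- apply(dmat, 2, function(x) x / sum(x))                 # column proportions
  e <- apply(p, 2, function(pj) -sum(pj * log(pj)) / log(m))  # normalized entropy
  d <- 1 - e                                                  # degree of divergence
  d / sum(d)                                                  # weights sum to 1
}

toy <- matrix(c(85, 80, 82, 92, 90, 88), nrow = 3,
              dimnames = list(paste0("G", 1:3), c("C1", "C2")))
round(entropy_weights(toy), 3)
```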
parcorplot(wmatrix, xl="Weighting Methods", yl="Weight", lt="Criteria")
In the following code snippet, we apply the TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) method using each of the previously calculated weight sets. This demonstrates how different weighting strategies can lead to varying rankings of alternatives.
critrank <- topsis(dmatrix=dmat, bcvec=bc, weights=critwei)$rank
entrank <- topsis(dmatrix=dmat, bcvec=bc, weights=entwei)$rank
equrank <- topsis(dmatrix=dmat, bcvec=bc, weights=equwei)$rank
ginirank <- topsis(dmatrix=dmat, bcvec=bc, weights=giniwei)$rank
sdevrank <- topsis(dmatrix=dmat, bcvec=bc, weights=sdevwei)$rank
geomrank <- topsis(dmatrix=dmat, bcvec=bc, weights=geomwei)$rank
merecrank <- topsis(dmatrix=dmat, bcvec=bc, weights=merecwei)$rank
mpsirank <- topsis(dmatrix=dmat, bcvec=bc, weights=mpsiwei)$rank
rocrank <- topsis(dmatrix=dmat, bcvec=bc, weights=rocwei)$rank
rsrank <- topsis(dmatrix=dmat, bcvec=bc, weights=rswei)$rank
topsisranks <- cbind(Equal=equrank, Critic=critrank, StdDev=sdevrank, Gini=ginirank, Geometric=geomrank,
Merec=merecrank, Mpsi=mpsirank, Entropy=entrank, Roc=rocrank, Rs=rsrank)
The topsisranks matrix, printed below, displays the resulting rankings of alternatives when TOPSIS is applied with each of the ten weighting methods. By comparing the ranks across weighting methods, one can observe the impact of different weighting schemes on the final decision outcomes.
rownames(topsisranks) <- rownames(dmat)
print(t(topsisranks))
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## Equal 5 9 7 8 4 2 10 11 3 1 12 6
## Critic 3 12 8 4 9 6 7 10 2 5 11 1
## StdDev 2 12 4 8 6 7 9 10 5 1 11 3
## Gini 2 11 6 8 3 5 9 10 1 4 12 7
## Geometric 7 9 6 8 5 2 11 10 3 1 12 4
## Merec 5 9 7 8 4 2 10 11 3 1 12 6
## Mpsi 5 8 6 9 4 2 10 11 3 1 12 7
## Entropy 2 12 6 7 4 5 9 10 1 3 11 8
## Roc 5 8 6 9 1 3 7 12 2 4 11 10
## Rs 5 7 6 9 1 2 8 12 3 4 11 10
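For readers who want to see the mechanics, a minimal base-R sketch of the TOPSIS steps follows. It assumes the textbook formulation (vector normalization, weighted ideal and anti-ideal distances, closeness coefficient) and is not necessarily identical to mcdabench's topsis():

```r
# Textbook TOPSIS sketch (an assumption; not necessarily identical to
# mcdabench's topsis())
topsis_sketch <- function(dmat, bc, w) {
  nmat <- sweep(dmat, 2, sqrt(colSums(dmat^2)), "/")  # vector normalization
  vmat <- sweep(nmat, 2, w, "*")                      # weighted normalized matrix
  ideal <- ifelse(bc == 1, apply(vmat, 2, max), apply(vmat, 2, min))
  anti  <- ifelse(bc == 1, apply(vmat, 2, min), apply(vmat, 2, max))
  dplus  <- sqrt(rowSums(sweep(vmat, 2, ideal)^2))    # distance to ideal point
  dminus <- sqrt(rowSums(sweep(vmat, 2, anti)^2))     # distance to anti-ideal
  cc <- dminus / (dplus + dminus)                     # closeness coefficient
  rank(-cc, ties.method = "first")                    # rank 1 = best
}

toy <- matrix(c(85, 80, 82, 0.005, 0.007, 0.004), nrow = 3,
              dimnames = list(paste0("G", 1:3), c("C1", "C5")))
topsis_sketch(toy, bc = c(1, -1), w = c(0.6, 0.4))
```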
The rank heatmap provides a visual representation of the rankings. The smallest rank number (1) indicates highest (best) rank, while higher rank numbers indicate lower (worse) ranks. The heatmap allows for quick identification of alternatives that consistently rank high or low across different weighting methods, as well as those whose ranks vary considerably. The row dendrogram groups similar ranking profiles.
rankheatmap(t(topsisranks), colpal=1, cellnotes=TRUE, tcol="black", dendro="row")
The Principal Component Analysis (PCA) biplot of the ranks helps visualize the relationships between the different weighting methods (as variables) and the alternatives (as observations). Methods that cluster together in the plot tend to produce similar rankings. Alternatives positioned closer to specific methods are more highly ranked by those methods. This plot is useful for identifying the overall consistency of the rankings and potential outliers.
pca <- rankpca(t(topsisranks), biplot=TRUE)
This section presents the results of a sensitivity analysis, specifically focusing on the stability of individual alternative ranks across the different weighting methods. The ressens1 output provides a stability table for each alternative. This table typically includes metrics like the minimum, maximum, mean, and standard deviation of an alternative's rank across all applied weighting methods. A lower standard deviation suggests greater stability for that alternative's rank, indicating it is less sensitive to the choice of weighting method.
ressens1 <- sensana(t(topsisranks))
print(ressens1)
## $stabtable
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## SD 1.73 1.89 1.03 1.48 2.33 1.96 1.33 0.82 1.17 1.65 0.53 2.90
## CRV 0.42 0.19 0.17 0.19 0.57 0.54 0.15 0.08 0.45 0.66 0.05 0.47
## SD2 0.13 0.15 0.06 0.08 0.14 0.16 0.09 0.06 0.08 0.14 0.05 0.20
## SRSI 0.67 0.78 0.56 0.56 0.67 0.78 0.78 0.44 0.78 0.67 0.33 0.89
## RSI 0.70 0.60 0.70 0.70 0.50 0.60 0.60 0.80 0.70 0.70 0.90 0.40
##
## $sensscores
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## 1.3 1.5 1.0 0.8 1.5 1.6 1.2 0.7 0.8 1.5 0.5 2.2
##
## $sensscores2
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## 1.93 2.20 1.07 1.42 2.60 2.18 1.56 0.91 1.29 1.84 0.56 3.42
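The SD row of the stability table can be reproduced conceptually with base R: stack the rankings into a matrix (rows = weighting methods, columns = alternatives) and take column-wise standard deviations. The three-method toy below uses ranks taken from the table printed earlier; the remaining rows (CRV, SD2, SRSI, RSI) are package-specific indices not reproduced here.

```r
# Rank stability as column-wise SD (illustrative; rows are three of the ten
# weighting methods, columns the first three alternatives from the table above)
rankmat3 <- rbind(Equal  = c(5, 9, 7),
                  Critic = c(3, 12, 8),
                  Gini   = c(2, 11, 6))
colnames(rankmat3) <- paste0("G", 1:3)
rank_sd <- apply(rankmat3, 2, sd)  # lower SD = more stable rank
round(rank_sd, 2)
```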
Here, we compare the rank similarities and differences between the various weighting methods using several statistical tests and similarity measures. This helps quantify how consistent or divergent the ranking outcomes are across different approaches. The rescomp object contains several matrices that quantify the similarity between the rankings produced by each weighting method:
rescomp <- rankcompare(t(topsisranks), biplot=FALSE, nperms = 100, nboot=100,
entropyopt = "jsd", alpha = 0.05, padjmethod = "bonferroni")
The Spearman Rank Correlations matrix (src) shows the correlation between the ranks generated by each pair of weighting methods. Higher correlation values indicate greater similarity in rankings. In the displayed matrix, the lower triangle shows the Spearman correlation coefficients, while the upper triangle displays the significance test results.
print(rescomp$src) # Spearman rank correlations matrix
## Equal Critic StdDev Gini Geometric Merec Mpsi Entropy Roc Rs
## Equal 1 ** *** *** *** *** *** *** ***
## Critic 0.57 1 ** * * *
## StdDev 0.75 0.73 1 ** ** ** ** **
## Gini 0.86 0.64 0.79 1 ** *** *** *** *** **
## Geometric 0.96 0.58 0.74 0.76 1 *** *** ** * *
## Merec 1.00 0.57 0.75 0.86 0.96 1 *** *** *** ***
## Mpsi 0.99 0.46 0.71 0.85 0.94 0.99 1 *** *** ***
## Entropy 0.85 0.66 0.80 0.98 0.74 0.85 0.83 1 ** **
## Roc 0.83 0.27 0.49 0.84 0.69 0.83 0.86 0.80 1 ***
## Rs 0.84 0.21 0.45 0.80 0.71 0.84 0.88 0.76 0.99 1
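The lower-triangle values can be verified directly with base R's cor(); the rank vectors below are the Equal, Merec, and Critic rows of the transposed topsisranks matrix printed earlier:

```r
# Reproducing two entries of the Spearman matrix with base R's cor()
equrank   <- c(5, 9, 7, 8, 4, 2, 10, 11, 3, 1, 12, 6)  # Equal-weights ranks
merecrank <- c(5, 9, 7, 8, 4, 2, 10, 11, 3, 1, 12, 6)  # MEREC ranks (identical)
critrank  <- c(3, 12, 8, 4, 9, 6, 7, 10, 2, 5, 11, 1)  # CRITIC ranks

cor(equrank, merecrank, method = "spearman")           # 1.00, as in the matrix
round(cor(equrank, critrank, method = "spearman"), 2)  # 0.57, as in the matrix
```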
The WS similarity matrix (wsrs) is another measure of rank similarity, one that weights agreement at the top of the ranking more heavily than agreement at the bottom.
print(rescomp$wsrs) # WS similarity matrix
## Equal Critic StdDev Gini Geometric Merec Mpsi Entropy Roc Rs
## Equal 0.64 0.81 0.74 0.98 1.00 1.00 0.78 0.79 0.83
## Critic 0.65 0.76 0.64 0.72 0.65 0.60 0.60 0.51 0.48
## StdDev 0.84 0.71 0.76 0.83 0.84 0.83 0.80 0.64 0.65
## Gini 0.78 0.82 0.73 0.71 0.78 0.78 0.98 0.84 0.79
## Geometric 0.97 0.65 0.82 0.72 0.97 0.97 0.76 0.76 0.79
## Merec 1.00 0.64 0.81 0.74 0.98 1.00 0.78 0.79 0.83
## Mpsi 1.00 0.64 0.81 0.74 0.98 1.00 0.79 0.80 0.84
## Entropy 0.79 0.84 0.76 0.98 0.73 0.79 0.79 0.83 0.78
## Roc 0.79 0.57 0.59 0.84 0.74 0.79 0.80 0.78 0.96
## Rs 0.83 0.49 0.57 0.79 0.78 0.83 0.84 0.73 0.96
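The wsrs values follow (up to rounding) the WS rank-similarity coefficient of Sałabun and Urbaniak, which discounts disagreements at lower-ranked positions via 2^(-rank) factors; assuming that formula, a base-R sketch is:

```r
# WS rank-similarity sketch (assumes the Salabun-Urbaniak formula; note the
# coefficient is asymmetric, which is why wsrs is not a symmetric matrix)
ws_coef <- function(rx, ry) {
  n <- length(rx)
  1 - sum(2^(-rx) * abs(rx - ry) / pmax(abs(rx - 1), abs(rx - n)))
}

equrank  <- c(5, 9, 7, 8, 4, 2, 10, 11, 3, 1, 12, 6)  # Equal-weights ranks
critrank <- c(3, 12, 8, 4, 9, 6, 7, 10, 2, 5, 11, 1)  # CRITIC ranks
round(ws_coef(equrank, critrank), 2)  # 0.64, the Equal-row entry above
round(ws_coef(critrank, equrank), 2)  # 0.65, the Critic-row entry above
```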
The Wilcoxon Rank Sum Test (wilcox) matrix provides p-values from pairwise Wilcoxon tests, indicating whether there’s a statistically significant difference between the ranks generated by two methods. In the displayed matrix, the lower triangle shows the test statistic (W), while the upper triangle displays the adjusted p-values for the statistical inference.
print(rescomp$wilcox) # Wilcox test matrix
## Equal Critic StdDev Gini Geometric Merec Mpsi Entropy Roc Rs
## Equal 1.00 1.00 1.00 1.00 NaN 1.00 1.00 1.00 1.00
## Critic 39.50 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## StdDev 28.50 23.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## Gini 27.50 35.50 18.00 1.00 1.00 1.00 1.00 1.00 1.00
## Geometric 10.50 33.00 22.50 20.50 1.00 1.00 1.00 1.00 1.00
## Merec 0.00 38.50 26.50 27.50 10.50 1.00 1.00 1.00 1.00
## Mpsi 5.00 39.00 32.50 22.50 15.00 5.00 1.00 1.00 1.00
## Entropy 32.00 24.00 14.00 10.50 26.00 32.00 28.00 1.00 1.00
## Roc 34.00 23.00 36.00 27.50 34.50 34.00 18.00 28.50 1.00
## Rs 23.50 32.50 32.50 27.00 24.00 23.50 14.00 27.50 5.00
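As a cross-check, the Equal-Critic statistic of 39.50 can be reproduced in base R with a paired Wilcoxon signed-rank test (whether rankcompare() uses the paired variant internally is an assumption inferred from the printed values; exact = FALSE avoids the tied-ranks warning):

```r
# Reproducing one pairwise test with base R's wilcox.test() (the paired
# variant is an assumption inferred from the printed statistic)
equrank  <- c(5, 9, 7, 8, 4, 2, 10, 11, 3, 1, 12, 6)  # Equal-weights ranks
critrank <- c(3, 12, 8, 4, 9, 6, 7, 10, 2, 5, 11, 1)  # CRITIC ranks
wt <- wilcox.test(equrank, critrank, paired = TRUE, exact = FALSE)
unname(wt$statistic)  # 39.5, matching the Equal-Critic entry above
```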
The rank entropy differences matrix with permutations (entper) and the rank Jensen-Shannon divergence (JSD) matrix with bootstrap (entboot) assess the uncertainty or dispersion of ranks, providing insight into the overall stability of the ranking system under permutation and bootstrap resampling.
In the displayed matrix, the lower triangle shows the Shannon entropy differences, while the upper triangle presents the adjusted p-values for statistical inference.
print(rescomp$entper) # Rank entropy matrix with permutations
## Equal Critic StdDev Gini Geometric Merec Mpsi Entropy Roc Rs
## Equal 0.89 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
## Critic 0.49 0.98 0.97 0.95 0.94 0.89 1.00 0.78 0.72
## StdDev 0.23 0.34 1.00 1.00 1.00 1.00 1.00 0.92 0.92
## Gini 0.26 0.38 0.32 0.96 0.99 1.00 1.00 1.00 1.00
## Geometric 0.03 0.45 0.23 0.34 1.00 1.00 0.99 1.00 0.99
## Merec 0.00 0.49 0.23 0.26 0.03 1.00 1.00 1.00 1.00
## Mpsi 0.01 0.56 0.25 0.27 0.04 0.01 0.99 1.00 1.00
## Entropy 0.24 0.37 0.29 0.01 0.32 0.24 0.25 1.00 0.99
## Roc 0.25 0.67 0.54 0.19 0.35 0.25 0.23 0.23 1.00
## Rs 0.23 0.73 0.56 0.26 0.33 0.23 0.21 0.30 0.02
In the displayed matrix, the lower triangle shows the Jensen-Shannon Divergence (JSD) values, while the upper triangle presents the adjusted p-values for the statistical inference.
print(rescomp$entboot) # Rank entropy matrix with bootstrap
## Equal Critic StdDev Gini Geometric Merec Mpsi Entropy Roc Rs
## Equal 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## Critic 0.508 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## StdDev 0.771 0.660 1.000 1.000 1.000 1.000 1.000 1.000 1.000
## Gini 0.739 0.625 0.679 1.000 1.000 1.000 1.000 1.000 1.000
## Geometric 0.972 0.547 0.769 0.660 1.000 1.000 1.000 1.000 1.000
## Merec 1.000 0.508 0.771 0.739 0.972 1.000 1.000 1.000 1.000
## Mpsi 0.993 0.444 0.755 0.734 0.959 0.993 1.000 1.000 1.000
## Entropy 0.761 0.631 0.710 0.985 0.681 0.761 0.754 1.000 1.000
## Roc 0.748 0.329 0.459 0.807 0.652 0.748 0.766 0.766 1.000
## Rs 0.767 0.270 0.436 0.741 0.673 0.767 0.788 0.698 0.981
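A hedged base-R sketch of the Jensen-Shannon divergence itself follows; how rankcompare() converts rank vectors into the discrete distributions it compares is not shown here and is an assumption.

```r
# Jensen-Shannon divergence between two discrete distributions (base-2 logs,
# so JSD is bounded in [0, 1]); the preprocessing of ranks is an assumption
jsd <- function(p, q) {
  p <- p / sum(p); q <- q / sum(q)
  m <- (p + q) / 2
  kl <- function(a, b) sum(ifelse(a > 0, a * log2(a / b), 0))  # KL divergence
  (kl(p, m) + kl(q, m)) / 2
}

jsd(c(1, 2, 3), c(1, 2, 3))  # identical distributions -> 0
jsd(c(1, 0), c(0, 1))        # disjoint distributions  -> 1 (maximum)
```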
This section delves into a more detailed sensitivity analysis by gradually modifying a specific set of weights and observing the impact on alternative rankings. This allows for a deeper understanding of how subtle changes in criterion importance can affect the final decision. The weisana function performs a gradual sensitivity analysis. Here, we incrementally perturb critwei (the CRITIC method weights) by a relative perturbation factor rp ranging from 0.01 to 0.46 in steps of 0.05. In the output below:
* gradweimat shows the modified weight sets generated during this gradual perturbation.
* rankmat presents the corresponding alternative rankings for each of these modified weight sets.
* topsisgrawei$sensitivity_table summarizes the distinct ranking patterns observed under these gradual weight changes, together with how often each pattern occurred.
mp <- list()
wp <- list(rp = seq(0.01, 0.5, 0.05))
topsisgrawei <- weisana(dmatrix = dmat, bcvec = bc, weights=critwei,
weimethod = "gradual", weipars = wp,
mcdamethod = topsis, methodpars = mp)
str(topsisgrawei)
## List of 4
## $ computing_time : 'difftime' num 0.06
## ..- attr(*, "units")= chr "secs"
## $ weights_matrix : num [1:101, 1:10] 0.0659 0.0652 0.0619 0.0586 0.0553 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:101] "init" "C1-1%" "C1-6%" "C1-11%" ...
## .. ..$ : chr [1:10] "C1" "C2" "C3" "C4" ...
## $ ranking_matrix : num [1:101, 1:12] 3 3 3 3 3 3 3 3 3 3 ...
## ..- attr(*, "dimnames")=List of 2
## .. ..$ : chr [1:101] "init" "C1-1%" "C1-6%" "C1-11%" ...
## .. ..$ : chr [1:12] "G1" "G2" "G3" "G4" ...
## $ sensitivity_table:'data.frame': 17 obs. of 3 variables:
## ..$ Pattern: chr [1:17] "3,12,8,4,9,6,7,10,2,5,11,1" "3,12,7,4,9,6,8,10,2,5,11,1" "3,12,7,5,9,6,8,10,2,4,11,1" "3,12,8,4,9,5,7,10,2,6,11,1" ...
## ..$ Count : int [1:17] 71 1 4 2 4 5 1 1 1 3 ...
## ..$ Percent: num [1:17] 70.3 0.99 3.96 1.98 3.96 4.95 0.99 0.99 0.99 2.97 ...
gradweimat <- topsisgrawei$weights_matrix # Modified weights matrix
rankmat <- topsisgrawei$ranking_matrix # Ranking matrix
senstable <- topsisgrawei$sensitivity_table # Different rankings summary table
head(round(gradweimat, 3))
## C1 C2 C3 C4 C5 C6 C7 C8 C9 C10
## init 0.066 0.051 0.057 0.090 0.180 0.079 0.032 0.273 0.046 0.125
## C1-1% 0.065 0.051 0.057 0.090 0.181 0.079 0.032 0.273 0.046 0.125
## C1-6% 0.062 0.051 0.058 0.090 0.181 0.080 0.032 0.274 0.046 0.126
## C1-11% 0.059 0.051 0.058 0.091 0.182 0.080 0.032 0.275 0.046 0.126
## C1-16% 0.055 0.051 0.058 0.091 0.182 0.080 0.032 0.276 0.047 0.127
## C1-21% 0.052 0.052 0.058 0.091 0.183 0.081 0.032 0.277 0.047 0.127
head(rankmat) # Rank matrix
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## init 3 12 8 4 9 6 7 10 2 5 11 1
## C1-1% 3 12 8 4 9 6 7 10 2 5 11 1
## C1-6% 3 12 8 4 9 6 7 10 2 5 11 1
## C1-11% 3 12 8 4 9 6 7 10 2 5 11 1
## C1-16% 3 12 8 4 9 6 7 10 2 5 11 1
## C1-21% 3 12 8 4 9 6 7 10 2 5 11 1
print(senstable)
## Pattern Count Percent
## Ranking_1 3,12,8,4,9,6,7,10,2,5,11,1 71 70.30
## Ranking_2 3,12,7,4,9,6,8,10,2,5,11,1 1 0.99
## Ranking_3 3,12,7,5,9,6,8,10,2,4,11,1 4 3.96
## Ranking_4 3,12,8,4,9,5,7,10,2,6,11,1 2 1.98
## Ranking_5 4,12,8,3,9,5,7,10,2,6,11,1 4 3.96
## Ranking_6 4,12,8,3,9,5,6,10,2,7,11,1 5 4.95
## Ranking_7 4,12,8,3,10,5,6,9,2,7,11,1 1 0.99
## Ranking_8 5,12,9,3,10,4,6,8,2,7,11,1 1 0.99
## Ranking_9 5,12,10,3,9,4,6,8,1,7,11,2 1 0.99
## Ranking_10 6,12,10,3,9,4,5,8,1,7,11,2 3 2.97
## Ranking_11 2,12,6,5,9,7,8,10,3,4,11,1 1 0.99
## Ranking_12 2,12,5,6,8,7,9,10,3,4,11,1 1 0.99
## Ranking_13 1,12,5,6,8,7,9,10,4,3,11,2 1 0.99
## Ranking_14 1,12,4,6,8,7,9,10,5,2,11,3 1 0.99
## Ranking_15 1,12,3,8,6,7,9,10,5,2,11,4 1 0.99
## Ranking_16 2,12,3,8,6,7,9,10,5,1,11,4 2 1.98
## Ranking_17 3,12,2,8,6,7,9,10,5,1,11,4 1 0.99
The table, labeled senstable, provides a detailed breakdown of the different ranking patterns observed during the sensitivity analysis where criterion weights were gradually modified.
Each row represents a unique ranking “Pattern” that emerged, showing the order of alternatives (e.g., 3,12,8,4,9,6,7,10,2,5,11,1 means Alternative 3 ranked first, Alternative 12 second, and so on).
Dominant Ranking Pattern: “Ranking_1” is overwhelmingly the most frequent outcome. This specific ranking pattern, 3,12,8,4,9,6,7,10,2,5,11,1, occurred 71 times, accounting for 70.30% of all observed rankings during the gradual weight perturbations. This indicates a high degree of stability for this particular ranking sequence, suggesting it is robust to a wide range of minor changes in criterion weights.
Diversity of Rankings: Despite the dominance of “Ranking_1”, a significant number of other unique ranking patterns were also observed (16 other distinct patterns in total). This implies that while one ranking is highly stable, the system is not entirely insensitive, and variations in weights can indeed lead to different orderings.
Low Frequency of Other Patterns: All other ranking patterns (from “Ranking_2” to “Ranking_17”) occurred with very low frequencies. Most of them appeared only once, representing a mere 0.99% of the total observations. Some appeared slightly more often, such as “Ranking_3” and “Ranking_5” (both 4 times, 3.96%) and “Ranking_6” (5 times, 4.95%). This demonstrates that while alternative rankings exist, they are relatively rare and less stable compared to the primary ranking.
In summary, the sensitivity analysis reveals that the ranking of alternatives is quite stable, with one specific order consistently emerging as the preferred outcome across various weight modifications. However, the presence of numerous other low-frequency ranking patterns underscores that the decision is not entirely immune to changes in criterion importance, and minor perturbations can, in some cases, lead to different alternative orderings. This information is crucial for decision-makers to assess the robustness of their final choice.
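The perturbation scheme itself can be sketched in base R (an assumption about how weisana() modifies weights; the actual implementation may differ): shrink one criterion's weight by a relative factor rp and renormalize so the vector still sums to 1, which implicitly redistributes the removed mass to the other criteria.

```r
# Gradual weight perturbation sketch (an assumption; weisana()'s actual
# scheme may differ)
perturb_weight <- function(w, j, rp) {
  w[j] <- w[j] * (1 - rp)  # shrink criterion j by relative factor rp
  w / sum(w)               # renormalize: mass shifts to the other criteria
}

w0 <- c(C1 = 0.5, C2 = 0.3, C3 = 0.2)
round(perturb_weight(w0, j = 1, rp = 0.1), 3)
```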
pca <- rankpca(rankmat, biplot=TRUE)
## Warning in rankpca(rankmat, biplot = TRUE): Constant columns removed: G2, G11
This PCA biplot, derived from the rankmat (rankings under gradual weight modification), offers a visual summary of how the rankings evolve as the weights are gradually changed. It can highlight thresholds where small weight changes lead to significant shifts in rankings, or identify zones of high stability. The biplot visualizes the relationships between the different states of gradually modified weights (represented by the blue labels like C1-46%, C1-41%, etc., where ‘C’ refers to a criterion and the percentage to its perturbation level) and the alternatives (represented by red text). The two dimensions, Dim1 and Dim2, capture 72.9% and 19.3% of the total variance, respectively, meaning they collectively explain a high proportion of the variability in the ranking data.
There’s a noticeable spread along Dim1, especially towards the right (e.g., C5-26% to C5-46%) and to the far left (C8-31% to C8-46%). This indicates that certain weight modifications, particularly those involving C5 and C8, lead to significantly different ranking outcomes compared to the central cluster.
Criterion 5 (C5): The labels like “C5-26%”, “C5-31%”, “C5-36%”, “C5-41%”, “C5-46%” are grouped on the far right side of Dim1. This suggests that as the weight of Criterion 5 is perturbed (especially at higher percentages like 26% to 46%), it drives the rankings to a distinct outcome, differing significantly from the rankings observed with other criterion perturbations. The vector for C5 (implied by the cluster of C5 points) would point towards the right.
Criterion 8 (C8): Similarly, the labels “C8-26%”, “C8-31%”, “C8-36%”, “C8-41%”, “C8-46%” are clustered on the far left side of Dim1, and some also spread vertically along Dim2. This indicates that perturbations of Criterion 8’s weight lead to another set of distinct rankings, significantly different from the C5-driven rankings and the central cluster.
The PCA biplot effectively visualizes the stability and sensitivity zones for the alternative rankings. It shows that while many gradual weight modifications result in similar rankings (the central cluster), certain criteria (notably C5 and C8, as indicated by their spread on the plot) have a strong influence.
## Sensitivity of Alternatives
Building on the gradual sensitivity analysis, this section provides a specific stability assessment of the alternatives under these progressively modified weights.
ressens2 <- sensana(rankmat)
print(ressens2$stabtable)
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## SD 0.77 0 1.28 0.97 0.63 0.62 0.73 0.44 0.73 1.12 0 0.64
## CRV 0.25 0 0.17 0.24 0.07 0.11 0.10 0.04 0.34 0.22 0 0.54
## SD2 0.04 0 0.06 0.05 0.03 0.04 0.03 0.02 0.03 0.05 0 0.03
## SRSI 0.09 0 0.12 0.09 0.05 0.06 0.09 0.03 0.06 0.12 0 0.06
## RSI 0.95 1 0.92 0.96 0.97 0.97 0.96 0.98 0.96 0.94 1 0.97
The ressens2$stabtable output provides the stability of each alternative’s rank under the influence of the gradually modified weights. This table is crucial for identifying which alternatives are robust to minor perturbations in criterion importance and which ones are highly sensitive, potentially leading to different decisions based on slight variations in expert judgment or data.
Finally, we explore how to synthesize the diverse rankings obtained from different weighting methods into a single, more robust preference order. This is achieved through rank aggregation techniques.
The rankaggregate function combines the individual rankings from topsisranks into a single, aggregated ranking. This aggregation step is vital for deriving a more consensual and robust decision, especially when multiple legitimate weighting approaches yield different results.
prefrank displays the final aggregated ranking of the alternatives. This is often considered a more stable and reliable ranking as it mitigates the influence of any single weighting method. preftable provides a more detailed breakdown, showing how each aggregation method orders the alternatives, potentially including information on how many times an alternative ranked in the top k positions.
respref <- rankaggregate(t(topsisranks), topk=1, tiesmethod="average")
preftable <- respref$preference_table
prefrank <- respref$preference_ranking
print(prefrank)
## G1 G2 G3 G4 G5 G6 G7 G8 G9 G10 G11 G12
## TOPK1 8.5 8.5 8.5 8.5 2.5 8.5 8.5 8.5 2.5 1 8.5 4.0
## RANKSUM 4.5 10.0 6.5 8.0 4.5 3.0 9.0 11.0 2.0 1 12.0 6.5
## MEDIAN 5.0 9.5 6.0 8.0 4.0 2.0 9.5 11.0 3.0 1 12.0 7.0
## BORDACNT 4.5 10.0 6.5 8.0 4.5 3.0 9.0 11.0 2.0 1 12.0 6.5
## COPELAND 5.0 9.5 6.5 8.0 4.0 3.0 9.5 11.0 2.0 1 12.0 6.5
## KEMYNG 5.0 9.5 6.5 8.0 4.0 3.0 9.5 11.0 2.0 1 12.0 6.5
## MARKOV 5.0 10.0 7.0 8.0 4.0 3.0 9.0 11.0 2.0 1 12.0 6.0
print(preftable)
## Method Outranking
## 1 TOPK1 G10 > G5 = G9 > G12 > G1 = G2 = G3 = G4 = G6 = G7 = G8 = G11
## 2 RANKSUM G10 > G9 > G6 > G1 = G5 > G3 = G12 > G4 > G7 > G2 > G8 > G11
## 3 MEDIAN G10 > G6 > G9 > G5 > G1 > G3 > G12 > G4 > G2 = G7 > G8 > G11
## 4 BORDACNT G10 > G9 > G6 > G1 = G5 > G3 = G12 > G4 > G7 > G2 > G8 > G11
## 5 COPELAND G10 > G9 > G6 > G5 > G1 > G3 = G12 > G4 > G2 = G7 > G8 > G11
## 6 KEMYNG G10 > G9 > G6 > G5 > G1 > G3 = G12 > G4 > G2 = G7 > G8 > G11
## 7 MARKOV G10 > G9 > G6 > G5 > G1 > G12 > G3 > G4 > G7 > G2 > G8 > G11
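As an illustration of what one of these rows computes, a base-R Borda count sketch is shown below (illustrative, not rankaggregate()'s internal code). Note that awarding n - rank points per method makes the Borda ordering equivalent to ordering by rank sums, which is consistent with the identical RANKSUM and BORDACNT rows above.

```r
# Borda-count aggregation sketch (illustrative, not rankaggregate() internals)
borda <- function(rankmat) {                  # rows = methods, cols = alternatives
  points <- colSums(ncol(rankmat) - rankmat)  # n - rank points from each method
  rank(-points, ties.method = "average")      # 1 = best aggregate rank
}

toy <- rbind(m1 = c(1, 2, 3),
             m2 = c(2, 1, 3),
             m3 = c(1, 3, 2))
borda(toy)
```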