Robust winner determination in positional scoring rules with uncertain weights

Theory and Decision

Abstract

Scoring rules constitute a particularly popular technique for aggregating a set of rankings. However, setting the weights associated with rank positions is a crucial task, as different instantiations of the weights can often lead to different winners. In this work we adopt minimax regret as a robust criterion for determining the winner in the presence of uncertainty over the weights. Focusing on two general settings (non-increasing weights and convex sequences of non-increasing weights) we provide a characterization of the minimax regret rule in terms of cumulative ranks, allowing a quick computation of the winner. We then analyze the properties of using minimax regret as a social choice function. Finally we provide some test cases of rank aggregation using the proposed method.



Notes

  1. Part of this article is based on a conference paper (Viappiani 2018), where we presented some of the theoretical results (without proofs) and discussed the minimax-regret approach, comparing it with expected utility and other criteria. In this article we extend the analysis, focusing on minimax regret, and study its properties in the context of social choice.

  2. There is some redundancy in the constraints: it is enough to assume convexity and \(w_{m-1}\ge 0\) to ensure that the sequence is non-increasing.

  3. http://www.gurobi.com/

  4. See for instance the discussion on “Minimax and the objection from irrelevant alternatives” in Peterson’s book (Peterson 2017, p. 53).

  5. Indeed, Llamazares and Peña (2009) remark that winner determination should not be sensitive to the ranks obtained by “inefficient” alternatives.

  6. http://www.preflib.org/.

  7. Several different versions have been adopted over the years; here we choose to compare only with respect to the point system used in 2010.

References

  • Baumeister, D., Roos, M., Rothe, J., Schend, L., & Xia, L. (2012). The possible winner problem with uncertain weights. In ECAI 2012—20th European conference on artificial intelligence, Montpellier, France, August 27–31, 2012, pp. 133–138. https://doi.org/10.3233/978-1-61499-098-7-133.

  • Bossert, W., & Suzumura, K. (2018). Positionalist voting rules: A general definition and axiomatic characterizations. Tech. rep. http://pages.videotron.com/wbossert/positionalist_dec18.pdf.

  • Boutilier, C., Patrascu, R., Poupart, P., & Schuurmans, D. (2006). Constraint-based optimization and utility elicitation using the minimax decision criterion. Artificial Intelligence, 170(8–9), 686–713.

  • Braziunas, D., & Boutilier, C. (2008). Elicitation of factored utilities. AI Magazine, 29(4), 79–92.

  • Cook, W. D., & Kress, M. (1990). A data envelopment model for aggregating preference rankings. Management Science, 36(11), 1302–1310.

  • Ehrgott, M. (2005). Multicriteria optimization (2nd ed.). Berlin: Springer. https://doi.org/10.1007/3-540-27659-9.

  • Fishburn, P., & Gehrlein, W. (1976). Borda’s rule, positional voting, and Condorcet’s simple majority principle. Public Choice, 28, 79–88.

  • Fishburn, P. C., & Vickson, R. G. (1978). Theoretical foundations of stochastic dominance. In G. A. Whitmore & M. C. Findlay (Eds.), Stochastic dominance (pp. 37–113). Lexington: D.C. Heath and Co.

  • Foroughi, A., & Tamiz, M. (2005). An effective total ranking model for a ranked voting system. Omega, 33(6), 491–496. https://doi.org/10.1016/j.omega.2004.07.013.

  • French, S. (Ed.). (1986). Decision theory: An introduction to the mathematics of rationality. New York: Halsted Press.

  • García-Lapresta, J. L., & Martínez-Panero, M. (2017). Positional voting rules generated by aggregation functions and the role of duplication. International Journal of Intelligent Systems, 32(9), 926–946. https://doi.org/10.1002/int.21877.

  • Goldsmith, J., Lang, J., Mattei, N., & Perny, P. (2014). Voting with rank dependent scoring rules. In Proceedings of the twenty-eighth AAAI conference on artificial intelligence, July 27–31, 2014, Québec City, Québec, Canada, pp. 698–704.

  • Green, R. H., Doyle, J. R., & Cook, W. D. (1996). Preference voting and project ranking using DEA and cross-evaluation. European Journal of Operational Research, 90(3), 461–472. https://doi.org/10.1016/0377-2217(95)00039-9.

  • Haghtalab, N., Noothigattu, R., & Procaccia, A. D. (2018). Weighted voting via no-regret learning. In Proceedings of the thirty-second AAAI conference on artificial intelligence, (AAAI-18), New Orleans, Louisiana, USA, February 2–7, 2018, pp. 1055–1062.

  • Hashimoto, A. (1997). A ranked voting system using a DEA/AR exclusion model: A note. European Journal of Operational Research, 97(3), 600–604. https://doi.org/10.1016/S0377-2217(96)00281-0.

  • Hazen, G. B. (1986). Partial information, dominance, and potential optimality in multiattribute utility theory. Operations Research, 34(2), 296–310. https://doi.org/10.1287/opre.34.2.296.

  • Khodabakhshi, M., & Aryavash, K. (2015). Aggregating preference rankings using an optimistic-pessimistic approach. Computers and Industrial Engineering, 85, 13–16. https://doi.org/10.1016/j.cie.2015.02.030.

  • Konczak, K., & Lang, J. (2005). Voting procedures with incomplete preferences. In Proceedings of IJCAI’05 multidisciplinary workshop on advances in preference handling, Edinburgh, Scotland, UK.

  • Kouvelis, P., & Yu, G. (1997). Robust discrete optimization and its applications. Dordrecht: Kluwer.

  • Llamazares, B. (2016). Ranking candidates through convex sequences of variable weights. Group Decision and Negotiation, 25, 567–584. https://doi.org/10.1007/s10726-015-9452-8.

  • Llamazares, B., & Peña, T. (2009). Preference aggregation and DEA: An analysis of the methods proposed to discriminate efficient candidates. European Journal of Operational Research, 197(2), 714–721. https://doi.org/10.1016/j.ejor.2008.06.031.

  • Llamazares, B., & Peña, T. (2013). Aggregating preferences rankings with variable weights. European Journal of Operational Research, 230(2), 348–355. https://doi.org/10.1016/j.ejor.2013.04.013.

  • Llamazares, B., & Peña, T. (2015a). Positional voting systems generated by cumulative standings functions. Group Decision and Negotiation, 24(5), 777–801. https://doi.org/10.1007/s10726-014-9412-8.

  • Llamazares, B., & Peña, T. (2015b). Scoring rules and social choice properties: Some characterizations. Theory and Decision, 78(3), 429–450. https://doi.org/10.1007/s11238-014-9429-0.

  • Lu, T., & Boutilier, C. (2011). Robust approximation and incremental elicitation in voting protocols. In Proceedings of IJCAI 2011, pp. 287–293.

  • Merlin, V. (2003). The axiomatic characterizations of majority voting and scoring rules. Mathématiques et Sciences Humaines / Mathematics and Social Sciences, (163).

  • Pearman, A. D. (1993). Establishing dominance in multiattribute decision making using an ordered metric method. Journal of the Operational Research Society, 44(5), 461–469.

  • Peterson, M. (2017). An introduction to decision theory: Cambridge introductions to philosophy (2nd ed.). Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316585061.

  • Procaccia, A. D., Zohar, A., Peleg, Y., & Rosenschein, J. S. (2009). The learnability of voting rules. Artificial Intelligence, 173(12–13), 1133–1149. https://doi.org/10.1016/j.artint.2009.03.003.

  • Salo, A. (1995). Interactive decision aiding for group decision support. European Journal of Operational Research, 84, 134–149. https://doi.org/10.1016/0377-2217(94)00322-4.

  • Salo, A., & Hämäläinen, R. P. (2001). Preference ratios in multiattribute evaluation (PRIME)-elicitation and decision procedures under incomplete information. IEEE Transactions on Systems, Man, and Cybernetics, 31(6), 533–545.

  • Savage, L. J. (1954). The foundations of statistics. New York: Wiley.

  • Stein, W. E., Mizzi, P. J., & Pfaffenberger, R. C. (1994). A stochastic dominance analysis of ranked voting systems with scoring. European Journal of Operational Research, 74(1), 78–85. https://doi.org/10.1016/0377-2217(94)90205-4.

  • Viappiani, P. (2018). Positional scoring rules with uncertain weights. In Proceedings of scalable uncertainty management—12th international conference, SUM 2018, Milan, Italy, October 3–5, 2018, pp. 306–320.

  • Weber, M. (1987). Decision making with incomplete information. European Journal of Operational Research, 28(1), 44–57. https://doi.org/10.1016/0377-2217(87)90168-8.

  • Xia, L., & Conitzer, V. (2011). Determining possible and necessary winners given partial orders. Journal of Artificial Intelligence Research, 41, 25–67.

  • Young, H. P. (1974). An axiomatization of Borda’s rule. Journal of Economic Theory, 9, 43–52.

  • Young, H. P. (1975). Social choice scoring functions. SIAM Journal on Applied Mathematics, 28(4), 824–838. http://www.jstor.org/stable/2100365.

  • Zwicker, W. S. (2016). Introduction to the theory of voting. In F. Brandt, V. Conitzer, U. Endriss, J. Lang, & A. D. Procaccia (Eds.), Handbook of computational social choice (pp. 23–56). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781107446984.003.

Acknowledgements

This work was partially supported by the ANR project Cocorico-CoDec. The author thanks two anonymous reviewers for helpful comments. Moreover, the author would like to thank Jerome Lang for several comments on an early version of this paper, Stefano Moretti for pointing out some typos, and Patrice Perny for discussion on dominance relations and possible winners.

Author information

Correspondence to Paolo Viappiani.

Appendix A: Proofs

We omit the proof of Proposition 3, which is trivial.

Proposition 4

Let \(x, y \in A\). The following statements connect max regret and dominance relations:

  1.

    For an arbitrary set W of scoring vectors:

    If x weakly dominates y, then \({{\,\mathrm{MR}\,}}(x; W) \le {{\,\mathrm{MR}\,}}(y; W)\);

    if x strongly dominates y, then \({{\,\mathrm{MR}\,}}(x; W) < {{\,\mathrm{MR}\,}}(y; W)\).

  2.

    Moreover, when considering decreasing weights, i.e. the scoring vector belongs to \(W^{D}\):

    If \(V^{x} \succeq V^{y}\) then \({{\,\mathrm{MR}\,}}(x; W^{D}) \le {{\,\mathrm{MR}\,}}(y; W^{D})\);

    if \(V^{x} \succ V^{y}\) then \({{\,\mathrm{MR}\,}}(x; W^{D}) < {{\,\mathrm{MR}\,}}(y; W^{D})\).

  3.

    Finally, when considering convex weights, i.e. the scoring vector belongs to \(W^{C}\):

    If \({\mathcal {V}}^{x} \succeq {\mathcal {V}}^{y}\) then \({{\,\mathrm{MR}\,}}(x; W^{C}) \le {{\,\mathrm{MR}\,}}(y; W^{C})\);

    if \({\mathcal {V}}^{x} \succ {\mathcal {V}}^{y}\) then \({{\,\mathrm{MR}\,}}(x; W^{C}) < {{\,\mathrm{MR}\,}}(y; W^{C})\).

Proof

1) If x weakly dominates y, by definition \(\forall w \in W \;\; s(x; w) \ge s(y; w)\) (Definition 3). For any \(z \in A\), we have that:

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x, z; W) =&\max _{w \in W} \{ s(z; w) - s(x; w) \} \le \max _{w \in W} \{ s(z; w)\nonumber \\&- s(y; w) \} = {{\,\mathrm{PMR}\,}}(y, z;W). \end{aligned}$$
(31)

Therefore \({{\,\mathrm{MR}\,}}(x; W) = \max _{z \in A} {{\,\mathrm{PMR}\,}}(x, z; W) \le \max _{z \in A} {{\,\mathrm{PMR}\,}}(y, z;W) = {{\,\mathrm{MR}\,}}(y;W)\). The statement for strong dominance follows in the same way, with strict inequalities throughout.

2) The result follows directly from part 1 of this proposition and Proposition 1.

3) The result follows directly from part 1 of this proposition and Proposition 2. \(\square \)

Proposition 5

The following statements hold:

  1.

    For any \(x \in S^{*}(W)\), x is weakly undominated, i.e. there is no \(y \in A\) such that y strongly dominates x.

  2.

    For all \(x \in S^{*}(W)\), there is no \(y \in A\setminus S^{*}(W)\) such that y weakly dominates x.

Proof

  1.

    Let \(x \in S^{*}(W)\), by definition \(x \in \arg \min _{z \in A} {{\,\mathrm{MR}\,}}(z; W)\). Now assume y strongly dominates x. Then, by Proposition 4, we have \({{\,\mathrm{MR}\,}}(y;W) < {{\,\mathrm{MR}\,}}(x;W)\), but this is absurd.

  2.

    Let \(x \in S^{*}(W)\). Now assume \(y \in A\setminus S^{*}(W)\) and that y weakly dominates x. Then, by Proposition 4, we have \({{\,\mathrm{MR}\,}}(y;W) \le {{\,\mathrm{MR}\,}}(x;W)\) and, therefore, \({{\,\mathrm{MR}\,}}(y;W)={{\,\mathrm{MMR}\,}}(W)\), meaning \(y \in S^{*}(W)\). We obtain a contradiction. \(\square \)

Proposition 6

Let \(x \in A\). The following statements hold:

  1.

    Alternative x is a necessary co-winner if and only if \({{\,\mathrm{MR}\,}}(x;W)=0\).

  2.

    If x is a necessary winner then \({{\,\mathrm{MR}\,}}(x;W)=0\) and \(S^{*}(W)=\{x\}\).

  3.

    Alternative x is a necessary winner if and only if \({{\,\mathrm{PMR}\,}}(x,z;W)<0\) for all \(z \in A {\setminus } \{x\}\).

Proof

  1.

    x is a necessary co-winner \(\iff \)\(s(x;w) \ge s(z;w) \;\; \forall z \in A, \forall w \in W\)\(\iff \)\({{\,\mathrm{PMR}\,}}(x, z; W) \le 0\) for all \(z \in A\)\(\iff \)\({{\,\mathrm{MR}\,}}(x; W) = 0\).

  2.

    If x is a necessary winner then it is also a necessary co-winner, and part 1 of this proposition implies \({{\,\mathrm{MR}\,}}(x;W)=0\), and \(x \in S^{*}(W)\). Let \(y \in A {\setminus }\{x\}\). Since x is a necessary winner, \(s(x;w)>s(y;w)\) for all \(w \in W\); therefore, \({{\,\mathrm{PMR}\,}}(y,x;W) > 0\) and \({{\,\mathrm{MR}\,}}(y; W) \ge {{\,\mathrm{PMR}\,}}(y,x;W) > 0\). Hence \(S^{*}(W)=\{x\}\).

  3.

    x is a necessary winner \(\iff \) \(s(x;w) > s(z;w) \;\; \forall z \in A {\setminus } \{x\}, \forall w \in W\) \(\iff \) \({{\,\mathrm{PMR}\,}}(x, z; W) < 0\) for all \(z \in A {\setminus } \{x\}\).

\(\square \)

Lemma 1

In the case of decreasing weights, for any pair \(x, y \in A\), it holds that

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^{D}) = \max _{j \in [\![m-1 ]\!]} \{ V^y_j - V^x_j \}. \end{aligned}$$
(21)

Proof

By noticing that the optimal solution of a bounded linear program is attained at a vertex of the feasible region, we derive

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^D)&= \max \left\{ \sum _{j=1}^{m} [w_j v^y_j - w_j v^x_j] \Big | 1 = w_1 \ge w_{2} \ge \cdots \ge w_{m-1} \ge w_m = 0 \right\} \nonumber \\&= \max \left\{ \sum _{j=1}^{m-1} \delta _j ( V^y_j - V^x_j) \Big | \delta _1 \ge 0, \ldots , \delta _{m-1} \ge 0 \wedge \sum _{j=1}^{m-1} \delta _j = 1 \right\} \nonumber \\&=\max \left\{ \sum _{j=1}^{m-1} \delta _j ( V^y_j - V^x_j) \Big | \delta \in \{ \text {oneat(1)},\ldots ,\text {oneat(m-1)} \} \right\} \nonumber \\&= \max _{j \in [\![m-1 ]\!]} \{ V^y_j - V^x_j \} \end{aligned}$$
(32)

where \(\text {oneat(j)}\) is the vector with 0 everywhere except in position j where the value is 1. \(\square \)
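Lemma 1 makes pairwise max regret a one-pass computation over cumulative ranks, with no linear programming needed. A minimal Python sketch (function and variable names are ours, not the paper's):

```python
# Sketch of Eq. (21): PMR(x, y; W^D) = max_j { V^y_j - V^x_j }, j = 1..m-1.
# A rank distribution is a tuple v = (v_1, ..., v_m), where v_j counts the
# voters placing the alternative in position j.

def cumulative(v):
    """Cumulative ranks: V_j = number of voters ranking the alternative in the top j."""
    out, total = [], 0
    for count in v:
        total += count
        out.append(total)
    return out

def pmr_dec(vx, vy):
    """Pairwise max regret of x against y under decreasing weights (Lemma 1)."""
    Vx, Vy = cumulative(vx), cumulative(vy)
    # Only positions 1..m-1 matter, since V_m = n for every alternative.
    return max(Vy[j] - Vx[j] for j in range(len(vx) - 1))
```

For instance, with n = 10 voters and m = 3 positions, vx = (7, 0, 3) and vy = (3, 7, 0) give Vx = (7, 7, 10) and Vy = (3, 10, 10), so pmr_dec(vx, vy) is 3 and pmr_dec(vy, vx) is 4.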

Theorem 1

For any alternative \(x \in A\) it holds

$$\begin{aligned} {{\,\mathrm{MR}\,}}(x; W^D) = \max _{j \in [\![m-1 ]\!]} \{ V^*_j - V^x_j \} \end{aligned}$$
(22)

where \(V^*_k = \max _{x \in A} V^x_k\).

Proof

We use Lemma 1 and substitute Eq. (21) into the formula of \({{\,\mathrm{MR}\,}}(x)\) given by Eq. (17):

$$\begin{aligned} {{\,\mathrm{MR}\,}}(x; W^D)= & {} \max _{y \in A} \max _{j \in [\![m-1 ]\!]} \{ V^y_j - V^x_j \} = \max _{j \in [\![m-1 ]\!]} \max _{y \in A} \{V^y_j - V^x_j\} \\= & {} \max _{j \in [\![m-1 ]\!]} \{ \max _{y \in A} \{V^y_j \}- V^x_j\} = \max _{j \in [\![m-1 ]\!]} \{V^*_j - V^x_j\}. \end{aligned}$$

\(\square \)
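With Theorem 1, winner determination under \(W^{D}\) reduces to tabulating cumulative ranks and scanning the first \(m-1\) positions. A sketch under our own naming conventions (not code from the paper):

```python
# Theorem 1: MR(x; W^D) = max_j { V*_j - V^x_j }, where V*_j = max_x V^x_j.
# A profile is a dict mapping each alternative to its rank distribution.

def cumulative(v):
    """Cumulative ranks: V_j = number of voters ranking the alternative in the top j."""
    out, total = [], 0
    for count in v:
        total += count
        out.append(total)
    return out

def max_regret_dec(profile):
    """Max regret of every alternative under decreasing weights (Theorem 1)."""
    m = len(next(iter(profile.values())))
    V = {x: cumulative(v) for x, v in profile.items()}
    v_star = [max(V[x][j] for x in profile) for j in range(m - 1)]
    return {x: max(v_star[j] - V[x][j] for j in range(m - 1)) for x in profile}
```

On the profile {'x': (7, 0, 3), 'y': (3, 7, 0), 'z': (0, 3, 7)} this returns max regrets 3, 4 and 7, so x is the minimax-regret winner; the cost is linear in \(m \cdot |A|\) once the rank distributions are tabulated.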

Lemma 2

Assuming convex weights, the pairwise max regret of an alternative x against y can be computed as follows:

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^C) = \max _{j \in [\![m-1 ]\!]} \left\{ \frac{{\mathcal {V}}^y_j - {\mathcal {V}}^x_j}{j} \right\} . \end{aligned}$$
(23)

Proof

The proof is similar to that of Lemma 1. The pairwise max regret can be computed using the following linear program:

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^C) = \max \;\;\;&\sum _{j=1}^{m-1} \phi _j ({\mathcal {V}}^y_j - {\mathcal {V}}^x_j) \end{aligned}$$
(33)
$$\begin{aligned} \text {s.t. } \;\;\;&\sum _{j=1}^{m-1} j \, \phi _{j} = 1 \end{aligned}$$
(34)
$$\begin{aligned}&\phi _1 \ge 0, \ldots , \phi _{m-1} \ge 0 \end{aligned}$$
(35)

Let \(\text {oneat(j)}\) be the vector with 0 everywhere except in position j where the value is 1.

We know from the theory of linear programming that the optimum is attained at a vertex. Therefore, the optimal \(\phi \) must be of the type \(\text {oneat}(j)/j\), that is \((1,0,\ldots ,0)\), \((0,0.5,0,\ldots ,0)\), ..., \((0,\ldots ,0, \frac{1}{m-1})\). The corresponding optimal \(\delta \) is among \((1,0,\ldots ,0)\), \((\frac{1}{2}, \frac{1}{2}, 0 , \ldots , 0)\), \((\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, 0, \ldots , 0)\), etc.

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^C)&= \max \left\{ \sum _{j=1}^{m-1} \phi _{j} ( {\mathcal {V}}^y_j - {\mathcal {V}}^x_j) \, \Big | \, \phi \in \left\{ \frac{ \text {oneat(j)} }{j} \right\} _{j=1}^{m-1} \right\} \nonumber \\&= \max _{j \in [\![m-1 ]\!]} \frac{{\mathcal {V}}^y_j - {\mathcal {V}}^x_j}{j}. \end{aligned}$$
(36)

\(\square \)

Theorem 2

Assuming convex weights, the max regret of alternative x can be computed as follows:

$$\begin{aligned} {{\,\mathrm{MR}\,}}(x; W^C) = \max _{j \in [\![m-1 ]\!]} \Big \{ \frac{{\mathcal {V}}^*_j - {\mathcal {V}}^x_j}{j} \Big \} \end{aligned}$$
(24)

where \({\mathcal {V}}^*_j = \max _{x \in A} {\mathcal {V}}^x_j\).

Proof

The result follows by using Lemma 2 to substitute Eq. (23) into Eq. (17); we then exchange the order of the two \(\max \).

$$\begin{aligned} {{\,\mathrm{MR}\,}}(x; W^C)= & {} \max _{y \in A} \max _{j \in [\![m-1 ]\!]} \left\{ \frac{{\mathcal {V}}^y_j - {\mathcal {V}}^x_j}{j} \right\} = \max _{j \in [\![m-1 ]\!]} \max _{y \in A} \left\{ \frac{{\mathcal {V}}^y_j - {\mathcal {V}}^x_j}{j} \right\} \\= & {} \max _{j \in [\![m-1 ]\!]} \left\{ \frac{{\mathcal {V}}^*_j - {\mathcal {V}}^x_j}{j} \right\} . \end{aligned}$$

\(\square \)
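The convex case (Theorem 2) is the same computation with double cumulative ranks and a division by the position index. A sketch in the same spirit as above (our naming):

```python
# Theorem 2: MR(x; W^C) = max_j { (VV*_j - VV^x_j) / j }, where VV denotes the
# double cumulative ranks (cumulative sum applied twice to the rank distribution).

def cumulative(v):
    out, total = [], 0
    for count in v:
        total += count
        out.append(total)
    return out

def max_regret_con(profile):
    """Max regret of every alternative under convex decreasing weights (Theorem 2)."""
    m = len(next(iter(profile.values())))
    VV = {x: cumulative(cumulative(v)) for x, v in profile.items()}
    vv_star = [max(VV[x][j] for x in profile) for j in range(m - 1)]
    # j + 1 because Python indices start at 0 while positions start at 1.
    return {x: max((vv_star[j] - VV[x][j]) / (j + 1) for j in range(m - 1))
            for x in profile}
```

On the profile {'x': (7, 0, 3), 'y': (3, 7, 0), 'z': (0, 3, 7)} the max regrets are 0.0, 4.0 and 7.0; since \({{\,\mathrm{MR}\,}}(x; W^{C})=0\), Proposition 6 tells us that x is a necessary co-winner once weights are restricted to be convex.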

Proposition 7

For any pair of alternatives \(x, y \in A\), it holds

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^{D,t}) = \left[ \left( 1 - \sum _{j=1}^{m-1} t_{j} \right) \max _{j\in [\![m-1 ]\!]} \{ V^y_j - V^x_j \} \right] + \sum _{j=1}^{m-1} t_{j} (V^y_j - V^x_j ) \end{aligned}$$
(26)

or, equivalently,

$$\begin{aligned} {{\,\mathrm{PMR}\,}}(x,y; W^{D,t}) = \left[ \left( 1 - \sum _{j=1}^{m-1} t_{j} \right) {{\,\mathrm{PMR}\,}}(x,y; W^{D}) \right] + \sum _{j=1}^{m-1} t_{j} (V^y_j - V^x_j) \end{aligned}$$

where \({{\,\mathrm{PMR}\,}}(x,y; W^{D})\) is pairwise max regret computed with no discriminative thresholds.

Proof

Let \({\hat{\delta }}_{j}\) be the slack between the value of \(\delta _{j}\) and its discriminative threshold \(t_{j}\):

$$\begin{aligned} \hat{\delta _j} = w_j - w_{j+1} - t_{j} = \delta _j - t_{j} \end{aligned}$$
(37)

for \(j \in [\![m-1 ]\!]\). Equation (25) ensures that \(\hat{\delta _j} \ge 0\). Consider the difference of score between two alternatives y and x:

$$\begin{aligned} s(y; w)-s(x; w)&= \sum _{j=1}^{m-1} {\hat{\delta }}_{j} (V^{y}_{j} - V^{x}_{j}) + \sum _{j=1}^{m-1} t_{j} (V^{y}_{j} - V^{x}_{j}) \end{aligned}$$
(38)

We now consider the maximum of \(s(y; w)-s(x; w)\) over scoring vectors \(w \in W^{D,t}\) (weakly decreasing, with discriminative thresholds). Note that the second summand on the right-hand side of Eq. (38) is constant with respect to w.

$$\begin{aligned}&\max _{w \in W^{D,t}} s(y; w)-s(x; w) \nonumber \\&\quad = \max \left\{ \sum _{j=1}^{m-1} \delta _j (V^y_j - V^x_j) \;\Big |\; 0 \le \delta _j \le 1 \wedge \sum _{j=1}^{m-1} \delta _j = 1 \wedge \delta _j \ge t_j \;\; \forall j \in [\![m-1 ]\!]\right\} \end{aligned}$$
(39)
$$\begin{aligned}&\quad = \sum _{j=1}^{m-1} t_{j} (V^y_j - V^x_j ) + \max \left\{ \sum _{j=1}^{m-1} {\hat{\delta }}_j (V^y_j - V^x_j) \;\Big |\; {\hat{\delta }}_j \ge 0 \;\; \forall j \in [\![m-1 ]\!]\wedge \sum _{j=1}^{m-1} {\hat{\delta }}_j = 1-\sum _{j=1}^{m-1} t_{j} \right\} \end{aligned}$$
(40)
$$\begin{aligned}&\quad = \sum _{j=1}^{m-1} t_{j} (V^y_j - V^x_j ) + \left( 1-\sum _{j=1}^{m-1} t_{j} \right) \max _{j \in [\![m-1 ]\!]} (V^y_j - V^x_j). \end{aligned}$$
(41)

\(\square \)
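Proposition 7 turns pairwise max regret with discriminative thresholds into a closed form. A sketch (our naming; the threshold vector t is assumed feasible, i.e. \(t_j \ge 0\) and \(\sum _j t_j \le 1\)):

```python
# Eq. (26): PMR(x, y; W^{D,t}) =
#   (1 - sum_j t_j) * max_j (V^y_j - V^x_j)  +  sum_j t_j * (V^y_j - V^x_j).

def cumulative(v):
    out, total = [], 0
    for count in v:
        total += count
        out.append(total)
    return out

def pmr_dec_thresholds(vx, vy, t):
    """Pairwise max regret under decreasing weights with thresholds t_1..t_{m-1}."""
    Vx, Vy = cumulative(vx), cumulative(vy)
    diffs = [Vy[j] - Vx[j] for j in range(len(t))]
    return (1 - sum(t)) * max(diffs) + sum(tj * d for tj, d in zip(t, diffs))
```

With t = (0, 0) the thresholds vanish and the value coincides with Lemma 1's \({{\,\mathrm{PMR}\,}}(x,y;W^{D})\): for vx = (7, 0, 3) and vy = (3, 7, 0) it returns 3, while t = (0.25, 0.25) shifts it to 1.25.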

Proposition 8

Let p be a profile with n voters.

  • If an alternative \(x \in A\) is ranked first more than \(\frac{2}{3} n\) times, then \({{\,\mathrm{MMR\hbox {-}Dec}\,}}(p)=\{x\}\).

    In other words, if x is such that \(v_{1}^{x} > \frac{2}{3} n\), then \(S^{*}(W^{D})=\{x\}\).

  • If an alternative \(x \in A\) is ranked first more than \(\frac{2m-2}{3m-2} n\) times, then \({{\,\mathrm{MMR\hbox {-}Con}\,}}(p)=\{x\}\).

    In other words, if x is such that \(v_{1}^{x} > \frac{2m-2}{3m-2} n\), then \(S^{*}(W^{C})=\{x\}\).

Proof

  1.

    Decreasing weights \(W^{D}\): Consider a profile p in which \(v^{x}=(a,0,\ldots ,0,n-a)\) and \(v^{y}=(n-a,a,0,\ldots ,0)\), with \(a > \frac{n}{2}\) (x is ranked first exactly a times and last in all remaining votes; y is first \(n-a\) times and second a times). Then \(V^{x}=(a,\ldots ,a)\) and \(V^{y}=(n-a,n,\ldots ,n)\). According to Lemma 1, \({{\,\mathrm{PMR}\,}}(x,y; W^{D})= V^{y}_{2} - V^{x}_{2} = n-a\) and \({{\,\mathrm{PMR}\,}}(y,x; W^{D})= V^{x}_{1} - V^{y}_{1}=2a-n\). It can be shown that in this profile \({{\,\mathrm{MR}\,}}(x; W^{D})={{\,\mathrm{PMR}\,}}(x,y; W^{D})\) and \({{\,\mathrm{MR}\,}}(y; W^{D})={{\,\mathrm{PMR}\,}}(y, x; W^{D})\), since every alternative other than x and y is (weakly) dominated. Therefore, x is a winner if

    $$\begin{aligned} {{\,\mathrm{MR}\,}}(x; W^{D})< {{\,\mathrm{MR}\,}}(y; W^{D}) \iff n-a < 2a-n \iff a > \frac{2}{3}n. \end{aligned}$$

    The last step is to show that the profile p is the most challenging situation for alternative x: in any profile \(p'\) in which x is ranked first at least a times, x is ranked at least as well as in p, and any challenger is ranked at most as well as y is in p. Monotonicity then guarantees that the max regret of x is minimal.

  2.

    Convex weights \(W^{C}\): Consider a profile p in which \(v^{x}=(a,0,\ldots ,0,n-a)\) and \(v^{y}=(n-a,a,0,\ldots ,0)\), with \(a > \frac{n}{2}\) (x is ranked first exactly a times and last in all remaining votes; y is first \(n-a\) times and second a times). The cumulative ranks of x and y are \(V^{x}=(a,\ldots ,a)\) and \(V^{y}=(n-a,n,\ldots ,n)\); the double cumulative ranks \({\mathcal {V}}^{x}\) and \({\mathcal {V}}^{y}\) are such that \({\mathcal {V}}_{j}^{x} = j a\) and \({\mathcal {V}}_{j}^{y}=jn-a\), with \(j \in [\![m-1 ]\!]\). We impose that the max regret of x is lower than that of y, and then use Lemma 2:

    $$\begin{aligned}&{{\,\mathrm{MR}\,}}(x; W^{C})< {{\,\mathrm{MR}\,}}(y; W^{C}) \\&\quad \iff {{\,\mathrm{PMR}\,}}(x,y; W^{C})< {{\,\mathrm{PMR}\,}}(y,x; W^{C}) \\&\quad \iff \max _{j \in [\![m-1 ]\!]} \left\{ \frac{{\mathcal {V}}^{y}_{j} - {\mathcal {V}}^{x}_{j}}{j} \right\}< \max _{j \in [\![m-1 ]\!]} \left\{ \frac{{\mathcal {V}}^{x}_{j} - {\mathcal {V}}^{y}_{j}}{j} \right\} \\&\quad \iff \underbrace{n- \frac{a}{m-1} }_{\frac{{\mathcal {V}}^{y}_{m-1}}{m-1}} - \underbrace{a}_{\frac{{\mathcal {V}}^{x}_{m-1}}{m-1}} < \underbrace{a}_{\frac{{\mathcal {V}}^{x}_{1}}{1}} - \underbrace{(n-a)}_{\frac{{\mathcal {V}}^{y}_{1}}{1}} \iff a > \frac{2 (m-1)}{3m-2} n. \end{aligned}$$

    Notice that alternative x is the “worst ranked” among all possible rank distributions satisfying the above condition, and y is ranked as well as possible (since we filled up the profile in the way most disadvantageous to x). To conclude the proof, as in the previous case, we invoke monotonicity to argue that in any other profile where alternative x satisfies the above condition on the number of first positions, x achieves minimax regret.

\(\square \)
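The bounds of Proposition 8 can be checked numerically on the worst-case profile used in the proof. In the sketch below (our construction), the filler alternative z completes the three-alternative profile in the most disadvantageous way to x:

```python
# Worst-case profile of the proof, for m = 3 alternatives and n voters:
# v^x = (a, 0, n-a), v^y = (n-a, a, 0); z takes the positions left free,
# v^z = (0, n-a, a), so each position is filled exactly n times.

def cumulative(v):
    out, total = [], 0
    for count in v:
        total += count
        out.append(total)
    return out

def mmr_dec_winners(profile):
    """Minimax-regret winners under decreasing weights, via Theorem 1."""
    m = len(next(iter(profile.values())))
    V = {x: cumulative(v) for x, v in profile.items()}
    v_star = [max(V[x][j] for x in profile) for j in range(m - 1)]
    mr = {x: max(v_star[j] - V[x][j] for j in range(m - 1)) for x in profile}
    best = min(mr.values())
    return {x for x, r in mr.items() if r == best}

def worst_case_profile(n, a):
    return {'x': (a, 0, n - a), 'y': (n - a, a, 0), 'z': (0, n - a, a)}
```

With n = 12 (so \(\frac{2}{3}n = 8\)): a = 9 makes x the unique winner, while a = 8, exactly at the bound, produces a tie between x and y, which matches the strict inequality \(v^{x}_{1} > \frac{2}{3}n\) in the statement.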

Theorem 3

The social choice functions \({{\,\mathrm{MMR\hbox {-}Dec}\,}}\) and \({{\,\mathrm{MMR\hbox {-}Con}\,}}\) satisfy anonymity, neutrality, unanimity, monotonicity, IRDA, homogeneity and independence from symmetric profiles.

Proof

Neutrality and anonymity are trivial to check. Proof sketches for homogeneity and independence from symmetric profiles were given in the main text.

For unanimity, we need to show that if all voters place an alternative x in first position, then x will have a max regret value of zero and, therefore, x will be declared the winner. x has rank distribution \(v^x=(n,0,\ldots ,0)\) and cumulative standings \(V^x=(n,\ldots ,n)\). All other alternatives \(y \ne x\) have 0 in the first component of their cumulative ranks, \(V^{y}_{1}=0\), and a value less than n in the other components. Using Theorem 1, \({{\,\mathrm{MR}\,}}(y; W^{D})=n\), for any y, while \({{\,\mathrm{MR}\,}}(x; W^{D})=0\); therefore, x is the only winner. The proof is similar for \(W^{C}\).

We now prove monotonicity. Assume that, starting from a profile p, we modify the ranking associated with one of the voters so that an alternative x moves from some position \(i_2\) to some position \(i_1<i_2\), and call the resulting profile \(p'\). Then the rank distribution in the new profile is such that \(v^{x}_{i_1}[p']=v^{x}_{i_1}[p]+1\) and \(v^{x}_{i_2}[p']=v^{x}_{i_2}[p]-1\). It follows that the cumulative rank distribution \(V^x[p']\) of alternative x in the new profile \(p'\) is such that

$$\begin{aligned} V^{x}_j[p'] = {\left\{ \begin{array}{ll} V^x_j[p] &{} j=1,\ldots ,i_1-1 \\ V^x_j[p]+1 &{} j=i_{1},\ldots ,i_2-1\\ V^x_j[p] &{} j=i_2,\ldots ,m \\ \end{array}\right. } \end{aligned}$$

We, therefore, have \(V^{x}[p'] \succeq V^{x}[p]\). We also have \(V^{y}[p'] \preceq V^{y}[p]\) for any \(y \ne x\), since each such y either keeps the same ranks as in p or loses one position.

Now, let \(J = \{ j \in [\![m-1 ]\!] | x \in \arg \max _{z} V^{z}_{j}[p'] \}\), that is the set of positions for which x has maximum cumulative rank in \(p'\). We have (using Theorem 1)

$$\begin{aligned} {{\,\mathrm{MR}\,}}_{p'}(x; W^{D})&= \max _{j=1}^{m-1} \{ V^{*}_{j}[p'] - V_{j}^{x}[p'] \} = \max _{j \not \in J} \{ \max _{y \in A {\setminus } \{x\}} \{ V^{y}_{j}[p'] \} - V_{j}^{x}[p'] \} \\&\le \max _{j \not \in J} \{ \max _{y \in A {\setminus } \{x\}} \{ V^{y}_{j}[p] \} - V_{j}^{x}[p] \} = \max _{j=1}^{m-1} \{ V^{*}_{j}[p] - V_{j}^{x}[p] \} \\&= {{\,\mathrm{MR}\,}}_{p}(x; W^{D}). \end{aligned}$$

and this means that the max regret of x in \(p'\) cannot be higher than the max regret of x in p. Let y be any alternative that is not a winner in p, meaning that \({{\,\mathrm{MR}\,}}_{p}(x; W^{D}) < {{\,\mathrm{MR}\,}}_{p}(y;W^{D})\). It follows that

$$\begin{aligned} {{\,\mathrm{MR}\,}}_{p'}(x;W^{D}) \le {{\,\mathrm{MR}\,}}_{p}(x; W^{D}) < {{\,\mathrm{MR}\,}}_{p}(y;W^{D}) \le {{\,\mathrm{MR}\,}}_{p'}(y;W^{D}). \end{aligned}$$

Therefore, if x is a winner in p, it continues to achieve minimax regret, and therefore remains a winner in \(p'\); we conclude that the rule is monotone for \(W^{D}\). Analogous reasoning applies for \(W^{C}\).

To prove IRDA, note that, according to Eq. (22), the max regret of an alternative depends only on its rank distribution and on the values \(V^{*}_{1},\ldots ,V^{*}_{m-1}\). But, for any j, \(V^{*}_{j} = \max _{y} V^{y}_{j}\), and the alternatives in \(\arg \max _{y} V^{y}_{j}\) have maximal cumulative rank at the j-th position, meaning that they are either undominated or dominated by another alternative that is also in \(\arg \max _{y} V^{y}_{j}\) (said another way, for each j, there is at least one undominated alternative in \(\arg \max _{y} V^{y}_{j}\)).

Thus any change in the ranks of dominated alternatives does not change the values \(V^{*}_{1},\ldots ,V^{*}_{m-1}\) when moving from p to \(p'\). This means that changing the ranks of dominated alternatives does not change \({{\,\mathrm{MR}\,}}(x; W^{D})\); therefore, the winners according to \({{\,\mathrm{MMR}\,}}\) remain the same, and \({{\,\mathrm{MMR}\,}}\) satisfies IRDA. \(\square \)


Cite this article

Viappiani, P. Robust winner determination in positional scoring rules with uncertain weights. Theory Decis 88, 323–367 (2020). https://doi.org/10.1007/s11238-019-09734-3
