Why Not Ranking?

May 31, 2016

For centuries, the study of voting (social choice) has focused on voters' rankings of the candidates. "If only we knew each voter's ranking," the theorists thought, "then we could determine the correct winner." While each of the great social choice theorists was inventing a different method to determine the winner from the rankings (or proving that a perfect system was impossible), they all missed the fact that a ranking from each voter is not always enough information to choose the correct winner. Consider this election:


49%: A > C > B

49%: B > C > A

2%: C > B > A


Candidate C is the favorite of 2% of the voters, and is the second choice of everyone else, with the rest of the electorate evenly split between A and B. Is C a good compromise candidate?


Frankly, we don't have enough information to know if C is a good compromise or not. Yes, this looks like a nearly-tied election, balanced on a knife's edge between A, B, and C, but perhaps it wouldn't be if we could get more information from the voters. Rankings allow voters to indicate which candidate they prefer over another, but prevent them from indicating the strengths of those preferences. Here’s a simple question we could ask to get some idea of the preference strengths:


Is your second choice a close second (closer to your first choice) or a distant second (closer to your last choice)?


If everyone who put C second considers him a close second, then he's clearly an excellent compromise candidate. But if everyone answers that he's a distant second, then he's clearly not. So the answer is to get more information from the voters. We must ask the voters to evaluate the candidates—not just rank them.


Many voting theorists have resisted asking for more than a ranking, with economics-based reasoning: utilities are not comparable between people. And it's true! As one of my kids said to the other, "You can't know if my hurt is worse than yours!" But no economist would bat an eye at asking one of the A voters above whether they'd prefer a coin flip between A and B winning or C winning outright. (This is the economist-friendly version of the question above.)


In fact, most economists would be perfectly willing to get more specific and ask voters whether they’d prefer C winning outright or a lottery where A would win 3 out of 10 times and B would win 7. They’d adjust those numbers until they obtained a fine-grained opinion about C from each voter. In layperson terms, this is equivalent to asking each voter where C falls on a line between A and B, to an accuracy of 10 percent. Behavioral economists may object if the scale were too fine, but most would agree that the results have meaning at a coarse level. Once we see that economists are willing to ask detailed probability questions, their argument against asking for anything more than a ranking from voters falls apart.
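That adjustment procedure is just a bisection on the lottery probability. Here is a minimal sketch of it (the function name, the `steps` parameter, and the stand-in questioner are my own illustrative choices, not from the economics literature): normalize the voter's favorite A to 1.0 and least favorite B to 0.0, and repeatedly ask whether the voter prefers C outright to a lottery that elects A with probability p and B otherwise.

```python
def locate_between(prefers_c_to_lottery, steps=10):
    """Bisect on the lottery probability p (the chance that the voter's
    favorite, A, wins; otherwise B wins) to find the point where the
    voter is indifferent between C outright and the lottery.

    The result is where C falls on the voter's own scale from B (0.0)
    to A (1.0).  `prefers_c_to_lottery(p)` is a stand-in for asking the
    voter the question; it returns True if the voter takes C over the
    lottery with A-probability p.
    """
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        p = (lo + hi) / 2
        if prefers_c_to_lottery(p):
            lo = p  # C beats this lottery, so C sits above p on the scale
        else:
            hi = p  # the lottery beats C, so C sits below p
    return (lo + hi) / 2

# Example: a hypothetical voter who (unbeknownst to us) values C at 0.7
# on the A-B scale answers each question consistently with that value.
estimate = locate_between(lambda p: 0.7 > p)
```

Ten questions pin down the indifference point to better than one percent; the article's 10-percent accuracy would need only four.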


So while the economists are correct that comparing preference strengths across individuals is futile, it is still true that for each voter, the scale between their most and least favored candidates is quantitatively meaningful and should be considered in deciding the election outcome.


Now back to our initial election. It is instructive to take this election and see what winner emerges with each of the rank-based social choice methods:


  • Condorcet: It's a close election, but C emerges as the beats-all (Condorcet) winner.

  • Borda Count: Again a close election, with C pulling out a win.

  • Bucklin: The first round is close, but nobody has a majority, so second-place votes are added as approvals. Then C wins in a landslide.

  • Instant runoff: C is eliminated immediately, laughed off the stage with such small first-place support. B barely beats A in the second round.
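As a check on the tallies above, here is a minimal sketch of these four rank-based methods applied to the example profile (the function names and data layout are mine, not from any standard library):

```python
# The example profile: (count, ranking best-to-worst).
ballots = [(49, ["A", "C", "B"]),
           (49, ["B", "C", "A"]),
           (2,  ["C", "B", "A"])]
candidates = ["A", "B", "C"]

def condorcet_winner(ballots, cands):
    """Return the candidate who beats every other head-to-head, if any."""
    def beats(x, y):
        x_pref = sum(n for n, r in ballots if r.index(x) < r.index(y))
        y_pref = sum(n for n, r in ballots if r.index(y) < r.index(x))
        return x_pref > y_pref
    for c in cands:
        if all(beats(c, other) for other in cands if other != c):
            return c
    return None

def borda_winner(ballots, cands):
    """Points: (m - 1) for first place down to 0 for last."""
    m = len(cands)
    score = {c: 0 for c in cands}
    for n, r in ballots:
        for place, c in enumerate(r):
            score[c] += n * (m - 1 - place)
    return max(cands, key=score.get)

def bucklin_winner(ballots, cands):
    """Count deeper rankings round by round until someone has a majority."""
    total = sum(n for n, _ in ballots)
    for depth in range(1, len(cands) + 1):
        tally = {c: sum(n for n, r in ballots if c in r[:depth])
                 for c in cands}
        leaders = [c for c in cands if tally[c] > total / 2]
        if leaders:
            return max(leaders, key=tally.get)
    return None

def irv_winner(ballots, cands):
    """Repeatedly eliminate the candidate with the fewest first-place votes."""
    remaining = list(cands)
    while len(remaining) > 1:
        tally = {c: 0 for c in remaining}
        for n, r in ballots:
            top = next(c for c in r if c in remaining)
            tally[top] += n
        remaining.remove(min(remaining, key=tally.get))
    return remaining[0]
```

Running these reproduces the article's tallies: Condorcet, Borda (102 to 100 over B), and Bucklin (100 of 100 voters rank C in their top two) all elect C, while instant runoff eliminates C at once and elects B, 51 to 49.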


So the standard social choice methods disagree widely on this election. Some declare C a close winner, another eliminates C with prejudice, and yet another has C winning in a landslide.


For completeness we note that with choose-one plurality, the election is a toss-up between A and B. C doesn’t stand a chance of winning but likely plays the role of spoiler between the other two! Top-two runoff would, like instant runoff, have C eliminated decisively and B pulling out a narrow win in the second round.


Some would claim that an election with only three candidates unfairly penalizes ranking systems, and that rankings become more informative when there are more candidates. Donald Saari, a prominent supporter of the Borda Count, asserts that the best way to determine the strength of a voter’s preference between A and B is to count the number of candidates the voter places between them. In reality, however, more candidates can serve to confuse things further. Even if I can rank five restaurants, might I not consider the quality of the fourth to be much closer to the first than to the last?


Asking for more information from each voter—an evaluation of each candidate—is what allows us to escape this confused state of affairs. That's why the Center for Election Science promotes rating-based (evaluative) methods instead of ranking-based ones. We need to know from each voter not just who their second-choice candidate is, but how good they think that candidate is.


And who would win the election above with the evaluative methods, Approval Voting, Score Voting, or Majority Judgment? Actually, it is impossible to tell, since we don’t know how the voters would translate their rankings into evaluations. This is precisely what tells us that these evaluative methods are getting more information from the voters than just the ranking. With Approval Voting, for instance, we don’t know how many of the A and B voters would approve C. If most of them did, then C would win. Otherwise, B would win. Which is exactly how it should be.
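To illustrate how the approval outcome hinges on that unknown, here is a sketch under one assumed translation of rankings into approvals (an assumption of mine, not given by the ballots): every voter approves their first choice, the 2% C-first voters also approve their second choice B, and some fraction of the 98 A-first and B-first voters also approve C.

```python
def approval_winner(approve_c_fraction):
    """Approval tallies for the example election under an assumed
    ballot translation: all voters approve their first choice, the
    2% C-first voters also approve B, and `approve_c_fraction` of
    the 98 A-first and B-first voters also approve C.
    """
    tally = {"A": 49.0,
             "B": 49.0 + 2.0,
             "C": 2.0 + 98.0 * approve_c_fraction}
    return max(tally, key=tally.get)
```

Under this assumption the tipping point is at half: if more than half of the A and B voters approve C, C wins; below that, B does.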


Of course, with any system there will be some elections that still end in ties and near-ties, but with an evaluative voting method such outcomes would indicate a truly torn electorate, not the illusion of a tie which we can end up with if we only have ranking information from voters.
