Core Concepts
Publishers often modify their documents to improve their ranking for multiple queries representing the same information need. This can lead to instability, as an equilibrium in the resulting ranking game does not necessarily exist.
Summary
The paper presents a theoretical and empirical analysis of ranking-incentivized document modifications in a competitive retrieval setting where publishers aim to improve their documents' rankings for multiple queries.
Key highlights:
- Game theoretic analysis shows that an equilibrium in the multiple-queries setting does not necessarily exist, in contrast to the single-query setting.
- Empirical analysis of ranking competitions reveals that publishers tend to mimic content from previously highly ranked documents, similar to the single-query setting.
- The neural ranker used in the competitions led to more diverse rankings across queries representing the same topic, making it harder for publishers to improve their documents' rankings for multiple queries.
- Information from rankings for other queries can help predict which document among the non-winners will become the top-ranked document in the next round.
Stats
"If p ≤ 1/m, then the profile where all players write d = (p, …, p) is a pure Nash equilibrium."
"If n = 2 and m > n, then the game G has a pure Nash equilibrium iff p ≤ 1/(m − 1)."
"The game G = ⟨n, m, p⟩ with n < m has a pure Nash equilibrium iff p ≤ 1/⌈2·m/(n − 1)⌉."
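To make the equilibrium statements above concrete: a profile is a pure Nash equilibrium when no single player can increase their payoff by unilaterally switching strategies. The sketch below is a generic brute-force check of that definition on a toy finite game; it does not reproduce the paper's game G = ⟨n, m, p⟩, whose strategy space is continuous and whose payoff rules are not given in this summary. All names (`is_pure_nash`, `payoff`, the coordination game) are illustrative assumptions.

```python
from itertools import product

def is_pure_nash(strategies, profile, payoff):
    """Check whether `profile` (a tuple of strategy choices, one per player)
    is a pure Nash equilibrium: no player gains by deviating alone.
    `payoff(i, profile)` returns player i's payoff under `profile`."""
    for i in range(len(profile)):
        current = payoff(i, profile)
        for alt in strategies[i]:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if payoff(i, deviated) > current:
                return False  # player i has a profitable unilateral deviation
    return True

# Toy 2-player coordination game: each player picks 0 or 1;
# both get payoff 1 if they match, 0 otherwise.
strategies = [(0, 1), (0, 1)]

def payoff(i, profile):
    return 1.0 if profile[0] == profile[1] else 0.0

equilibria = [pr for pr in product(*strategies)
              if is_pure_nash(strategies, pr, payoff)]
print(equilibria)  # the matching profiles: [(0, 0), (1, 1)]
```

The paper's negative result says that for the multiple-queries game, once p exceeds the stated thresholds, every candidate profile fails a check like this: some player always has a profitable deviation, so no pure equilibrium exists.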
Citations
"Previous work on the competitive retrieval setting focused on a single-query setting: document authors manipulate their documents so as to improve their future ranking for a given query."
"We study a competitive setting where authors opt to improve their document's ranking for multiple queries."