Information elicitation mechanisms, such as Peer Prediction [Miller 2005] and Bayesian Truth Serum [Prelec 2004], are designed to reward agents for honestly reporting their private information even when this information cannot be directly verified. These mechanisms are cleverly designed so that truth-telling is a strict Bayesian Nash equilibrium. However, a key challenge is that there are typically many other, non-truthful equilibria as well; it is important that truth-telling not only be an equilibrium, but be paid more than the other equilibria, so that agents do not coordinate on a non-informative equilibrium. Several past works have overcome this challenge in various settings using clever techniques, but a new technique was required for each new setting.
Our main contribution in this paper is a framework for designing information elicitation mechanisms in which truth-telling is the highest-paid equilibrium, even when the mechanism does not know the common prior. We define information monotone functions, which measure the amount of "information" contained in agents' reports in such a way that the function is greater in the truthful equilibrium than in non-truthful equilibria. We provide several interesting information monotone functions (f-disagreement, f-mutual information, f-information gain) in different settings. Aided by these theoretical tools, we (1) extend Dasgupta and Ghosh's mechanism to the non-binary setting, under the additional assumption that agents are asked to answer a large number of a priori similar questions; and (2) reprove the main results of Prelec and of Dasgupta and Ghosh, as well as a weaker version of Kong and Schoenebeck, in our framework. Our framework thus provides both new mechanisms and a deeper understanding of prior results.
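As an informal illustration of the monotonicity idea (a sketch, not one of the paper's mechanisms), f-mutual information can be computed as the f-divergence between two agents' joint report distribution and the product of its marginals; a correlated (informative) joint distribution then scores strictly higher than an independent (uninformative) one. The example distributions below are hypothetical:

```python
import numpy as np

def f_mutual_information(joint, f):
    """MI_f(X;Y) = sum_{x,y} p(x)p(y) * f( p(x,y) / (p(x)p(y)) ),
    i.e. the f-divergence between the joint and the product of marginals."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)          # marginal of agent X's reports
    py = joint.sum(axis=0, keepdims=True)          # marginal of agent Y's reports
    prod = px * py
    # Where the product of marginals is zero, the ratio is irrelevant; use 1 (f(1)=0).
    ratio = np.divide(joint, prod, out=np.ones_like(joint), where=prod > 0)
    return float((prod * f(ratio)).sum())

# f(t) = t log t recovers Shannon mutual information; f(t) = |t-1|/2 is total variation.
kl = lambda t: np.where(t > 0, t * np.log(np.where(t > 0, t, 1.0)), 0.0)
tv = lambda t: 0.5 * np.abs(t - 1)

# Hypothetical binary-signal example: correlated reports vs. independent reports.
informative = np.array([[0.4, 0.1],
                        [0.1, 0.4]])
uninformative = np.outer(informative.sum(axis=1), informative.sum(axis=0))

print(f_mutual_information(informative, kl))    # strictly positive
print(f_mutual_information(uninformative, kl))  # zero: no shared information
```

Any convex f with f(1) = 0 yields a nonnegative score that vanishes exactly when the reports are independent, which is the qualitative behavior an information monotone function needs.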