Markets are one of the primary mechanisms for determining who gets what in our society. Their combination with computation and the Internet has revolutionary potential – carefully designed, algorithmic markets can facilitate resource allocation that is unprecedentedly efficient, fair and profitable. This potential is already being realized in important applications such as the efficient allocation of radio spectrum among telecommunication companies, the monetization of online advertisements by companies like Google (funding free web search), and the fair allocation of schools among students.
However, several barriers stand in the way of maximizing social welfare by combining market mechanisms and computation. One issue is that not all problems are computationally tractable; but computer scientists have developed techniques to deal with intractability. For example, maximizing welfare exactly may be computationally hard, while approximately maximizing it is tractable. Often, the approximation algorithm performs surprisingly well in practice and is surprisingly simple – a typical example is “greedy” allocation of each resource in turn (independently of how the other remaining resources will be allocated).
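To make the greedy idea concrete, here is a minimal sketch, not the project's actual algorithm: each good is assigned, in turn, to the bidder who values it most given what that bidder already holds. The bidders, goods and capped-additive (diminishing-returns) valuations below are entirely hypothetical.

```python
# Greedy item-by-item allocation: each good goes to the bidder with the
# highest marginal value for it, ignoring how remaining goods will be placed.

def greedy_allocate(goods, bidders, marginal_value):
    """Assign each good in turn to the bidder with highest marginal value."""
    allocation = {b: set() for b in bidders}
    for g in goods:
        best = max(bidders, key=lambda b: marginal_value(b, g, allocation[b]))
        allocation[best].add(g)
    return allocation

# Toy valuations (hypothetical numbers): additive values capped at 8,
# which gives diminishing returns, the "no complements" regime.
values = {
    ("alice", "spectrum_A"): 5, ("alice", "spectrum_B"): 3,
    ("bob", "spectrum_A"): 4,   ("bob", "spectrum_B"): 6,
}

def marginal_value(bidder, good, held, cap=8):
    current = min(cap, sum(values[(bidder, g)] for g in held))
    with_good = min(cap, current + values[(bidder, good)])
    return with_good - current

alloc = greedy_allocate(["spectrum_A", "spectrum_B"], ["alice", "bob"],
                        marginal_value)
# alice gets spectrum_A (5 > 4); bob then gets spectrum_B (6 > 3).
```

On this toy instance the greedy outcome happens to be welfare-optimal; in general, for diminishing-returns valuations, greedy is known to guarantee a constant fraction of the optimal welfare.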
Another issue is that not all allocation problems can be solved by the invisible hand of the market. In fact, some of the fundamental theory underlying the idea of free markets relies on an assumption of “no complements”. This means that the goods on the market are assumed to have no synergies – owning one good cannot increase the value of owning another good (peanut butter and jelly are a classic example of complements, since having peanut butter increases the desirability of jelly). Complements greatly complicate the allocation of goods and may lead to market failures. However, the assumption of no complements is grossly unrealistic.
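In code, a complement shows up as superadditivity: a bundle worth strictly more than the sum of its parts. The following toy valuation (our own illustrative numbers) captures the peanut-butter-and-jelly example.

```python
# A toy valuation function exhibiting a complement: the two goods
# together carry a "sandwich bonus" beyond their separate values.
# All numbers are hypothetical.

def v(bundle):
    bundle = frozenset(bundle)
    base = {"peanut_butter": 2, "jelly": 1}
    total = sum(base[g] for g in bundle)
    if {"peanut_butter", "jelly"} <= bundle:
        total += 3  # synergy: the pair is worth more than its parts
    return total

# Superadditivity = complements: v(A ∪ B) > v(A) + v(B).
assert v({"peanut_butter", "jelly"}) > v({"peanut_butter"}) + v({"jelly"})
```

A "no complements" valuation would satisfy the reverse (subadditive) inequality for every pair of disjoint bundles.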
The main objective of this project is to carry the computer science approach for dealing with computational barriers over to the realm of economic barriers – in particular, the existence of complements among goods. We aim to determine whether widespread economic mechanisms (e.g., greedy allocation or standard ascending-price auctions) are robust to some degree of complementarity, and to design robust alternatives if they are not. Robustness is a key property in transferring theoretical insights into applicable and exploitable practices.
This year-long project resulted in six papers appearing in first-tier scientific venues. The results are of two kinds:
- Extending the mathematical and computational theory of complements
- Applications to the design of better computational markets
Both branches of results have been presented to interdisciplinary researchers (from computing, engineering, economics, management science and business disciplines) in multiple international forums, as well as to industry players like Microsoft and to public policy makers.
In the first branch, our research has uncovered intriguing mathematical structure. To give an idea of our results in a nutshell, consider a simple analogue of the complex valuation function that every market player has for owning different combinations of goods: a one-dimensional function y = f(x). It is well known that if the function is concave (“hill-shaped”), it is computationally easy to find its maximum, while for an arbitrary non-concave function, simple algorithms might get stuck at a suboptimal point. We are interested in the intermediate case, where the function is “approximately” concave – this is the analogue of a market with limited complements (it is “approximately” complement-free). Adding noise to a concave function maintains approximate concavity, but for the mathematical objects we are dealing with – high-dimensional discrete functions – establishing this is challenging, and our result was recognized by the community as “one of the best results in algorithms published in the last year”. It may be a first step in establishing that noisy, real-life markets are approximately complement-free. However, we also find that many standard techniques like greedy or ascending auctions may fail miserably with even a small dose of complements – an unacceptable risk, e.g., in high-stakes spectrum auctions. This motivates further research to design more robust market mechanisms.
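The one-dimensional analogue of the noise argument is easy to verify numerically. A discrete function is concave when f(x-1) + f(x+1) ≤ 2f(x) at every interior point; perturbing each value by at most ε can violate this inequality by at most 4ε, which is exactly “approximate concavity”. The sketch below uses a hypothetical hill-shaped sequence and noise level ε (the hard part of our actual work is the high-dimensional discrete version, which this toy check does not capture).

```python
import random

random.seed(0)
eps = 0.1
n = 20

# A concave ("hill-shaped") sequence, perturbed by noise bounded by eps.
concave = [-(x - 10) ** 2 for x in range(n)]
noisy = [y + random.uniform(-eps, eps) for y in concave]

# Exact discrete concavity: f(x-1) + f(x+1) - 2*f(x) <= 0.
# Each of the three terms moves by at most eps (one of them doubled),
# so the noisy sequence violates the inequality by at most 4*eps.
violation = max(noisy[x - 1] + noisy[x + 1] - 2 * noisy[x]
                for x in range(1, n - 1))
assert violation <= 4 * eps  # approximate concavity holds
```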
In the second branch, on the applications side, we develop more robust market mechanisms for: (1) extracting revenue (the driving force behind the internet and sharing economies), and (2) fairness and social good. For revenue, our mechanisms are based on simple and practical methods rather than on complex, personalized and dynamic pricing methods. Namely, we show that the following methods are robust to a mild degree of complements and yield high revenue: (a) taking the better of selling the goods separately and bundling them together; (b) enhancing competition for the goods (e.g., by investing in advertising). For social good and fairness, we analyze a method based on artificial currency: for example, students enrolling in over-demanded courses get “budgets” depending on their year of study, academic performance, etc., and use these budgets to “shop” for courses whose “price” reflects their popularity. Besides being transparent, we show this method has nice fairness guarantees. It is applicable to many scenarios, from sharing a family heirloom to allocating cabinet ministries among political parties.
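Method (a) can be illustrated on toy data. In the sketch below, the buyer values and prices are our own hypothetical numbers; each good (or the bundle) is posted at a fixed price with unlimited supply, and a buyer purchases whenever their value meets the price. This is a simplification, not our actual revenue analysis.

```python
# Compare revenue from posting separate per-good prices vs. one bundle
# price, over a hypothetical population of three buyers with additive values.

buyers = [
    {"a": 1, "b": 5},
    {"a": 5, "b": 1},
    {"a": 3, "b": 3},
]

def revenue_separate(pa, pb):
    """Post prices pa, pb; a buyer takes each good valued at or above its price."""
    return sum((pa if v["a"] >= pa else 0) + (pb if v["b"] >= pb else 0)
               for v in buyers)

def revenue_bundle(p):
    """Post one price p for the pair; a buyer takes it if total value >= p."""
    return sum(p for v in buyers if v["a"] + v["b"] >= p)

best_sep = max(revenue_separate(pa, pb)
               for pa in range(1, 7) for pb in range(1, 7))
best_bun = max(revenue_bundle(p) for p in range(1, 13))
best_of_both = max(best_sep, best_bun)
# Here bundling wins (18 vs. 12): every buyer values the pair at 6,
# even though their per-good values disagree.
```

On other value profiles the comparison flips, which is why the mechanism takes the better of the two options rather than committing to either.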
Our results as summarized above open a new horizon for theoretical research – developing robust versions of well-studied allocation algorithms and mechanisms. Robustness to a degree of complementarity is especially important in light of the far-reaching social and economic consequences of such mechanisms.
Our study of whether adding noise to discrete functions with nice properties approximately preserves these properties is of interest beyond welfare maximization. It mirrors a broad literature for continuous functions known as “stability”, and has implications for other computational fields, most notably machine learning.
Our simple and robust methods for achieving high revenue show that complicated pricing – notorious in its own right – may not be advisable even for sophisticated sellers. While the methods we suggest are simple, their analysis was challenging and required cutting-edge, duality-based techniques.
For the objective of fair allocation of public resources, our results offer a refined approach to fairness, one that accounts for the very different entitlements that members of the population may have to such resources, while simultaneously promoting transparency and efficiency.
More info: http://www.cs.huji.ac.il/.