SHUCHI CHAWLA
Professor
Department of Computer Science
The University of Texas at Austin
shuchi (AT) cs.utexas.edu
Hello! I am a professor of computer science at UT Austin. My research falls within the areas of theoretical computer science and economics and computation. I am interested in all kinds of algorithmic problems, but particularly enjoy working on problems that involve stochastic input, online decision-making, uncertainty and learning.

Some recent happenings
  • I'm co-organizing a semester on Data-Driven Decision Processes at the Simons Institute in Fall'22. We have an exciting slate of activities planned including a celebration of David Blackwell!
  • After spending 15 years in Madison, WI, I just moved from UW to UT-Austin. Excited for this new phase in my career!
  • The CATCS organized a TCS visioning workshop in Summer'20. The reports, which I co-edited, are now ready to share: Long report, Slides, Short report (a part of the CCC quadrennial series).
  • I recently got done co-chairing EC'21 with Federico Echenique and chairing SODA'20.
  • I feel honored to have been awarded the Provost's Mid Career Award and the Chancellor's Teaching Innovation Award for 2020 at UW-Madison.

Recent invited talks
  • Here is a video of my Richard M. Karp Distinguished Lecture at the Simons Institute in Berkeley in Oct'22.
  • Here is an invited talk I gave at SODA'22.
  • Here is a semi-plenary talk I gave at GAMES'20 (which actually took place in July'21).
  • Here is a survey talk I gave at the Dagstuhl scheduling workshop in Feb'20.
  • Here is a keynote talk I gave at WINE'19 in Dec'19.
  • Here is an invited talk on Online Resource Allocation at APPROX'19 in September'19.

Brief Biography
    Shuchi Chawla holds an Endowed Professorship in Computer Science at UT-Austin and is an Amazon Scholar. Shuchi is a theoretical computer scientist specializing in the areas of algorithm design, and economics and computation. Shuchi received a Ph.D. from Carnegie Mellon University and a B.Tech. from the Indian Institute of Technology, Delhi. Prior to joining UT-Austin, she spent 15 years as a professor of CS at the University of Wisconsin-Madison. She has also previously held visiting positions at the University of Washington and Microsoft Research. Shuchi is the recipient of an NSF CAREER award, a Sloan Foundation fellowship, and several awards for her research and teaching at UW-Madison. Shuchi recently served as the PC Chair of SODA'20 and EC'21, and currently serves on the editorial boards of the ACM Transactions on Algorithms and the ACM Transactions on Economics and Computation.

    Personal: Shuchi is married to CS professor Aditya Akella and they have two adorable kids.

Research

Interests: I am interested in all kinds of algorithmic problems, but particularly enjoy working on problems that involve stochastic input, online decision-making, uncertainty and learning. Much of my recent and current work lies within or is inspired by a subarea of economics called mechanism design, which involves designing systems and markets for selfish agents. I am also interested in algorithmic fairness, algorithmic issues in networks, and machine learning.

Sponsors: I am grateful for the generous support of the National Science Foundation, the Alfred P. Sloan Foundation, and Microsoft Research.


Research Group

Current advisees: Rojin Rezvan, Nathaniel Sauerberg, Dimitrios Christou, Kristin Sheridan, Trung Dang, Zhiyi Huang, Greg Kehne.

Graduate/postdoctoral alumni:

Undergraduate alumni: Huck Bennett (NYU), Boyan Li (CMU), Andrew Morgan (Wisconsin), Ruimin Zhang (U. Chicago)

Selected publications and preprints

(Click on each title to see abstracts and other info. For further information check out dblp, arXiv, or my Google Scholar profile.)

2024
  • https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4561235

    Authors: Saeed Alaei, Shuchi Chawla, Ali Makhdoumi, Azarakhsh Malekian

    We consider a mechanism design setting with a single item and a single buyer who is uncertain about the value of the item. Both the buyer and the seller have a common model for the buyer's value, but the buyer discovers her true value only upon receiving the item. We show that mechanisms in this setting can be interpreted as randomized refund mechanisms, which allocate the item at some price and then offer a (partial and/or randomized) refund to the buyer in exchange for the item if the buyer is unsatisfied with her purchase. Motivated by their practical importance, we study the design of optimal deterministic mechanisms in this setting. We first characterize optimal mechanisms as virtual value maximizers for both continuous and discrete type settings. We then use this characterization, along with tools like regularity and duality, to develop efficient algorithms for finding optimal and near-optimal deterministic mechanisms.
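    As a concrete illustration of the deterministic refund mechanisms described above (a minimal sketch with hypothetical notation, not the paper's formal definition): the buyer pays a price p up front, discovers her value v upon receiving the item, and may return the item for a refund r ≤ p. A rational buyer returns exactly when v < r, so the seller's expected revenue is

    ```latex
    % Minimal deterministic refund mechanism (p, r); notation hypothetical.
    % The buyer keeps the item when v >= r and returns it for r otherwise:
    \mathbb{E}[\mathrm{revenue}] \;=\; p \;-\; r \cdot \Pr[v < r].
    ```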

  • https://arxiv.org/abs/2304.01958

    Authors: Shuchi Chawla, Dimitrios Christou

    In the Time-Windows TSP (TW-TSP) we are given requests at different locations on a network; each request is endowed with a reward and an interval of time; the goal is to find a tour that visits as much reward as possible during the corresponding time window. For the online version of this problem, where each request is revealed at the start of its time window, no finite competitive ratio can be obtained. We consider a version of the problem where the algorithm is presented with predictions of where and when the online requests will appear, without any knowledge of the quality of this side information. Vehicle routing problems such as the TW-TSP can be very sensitive to errors or changes in the input due to the hard time-window constraints, and it is unclear whether imperfect predictions can be used to obtain a finite competitive ratio. We show that good performance can be achieved by explicitly building slack into the solution. Our main result is an online algorithm that achieves a competitive ratio logarithmic in the diameter of the underlying network, matching the performance of the best offline algorithm to within factors that depend on the quality of the provided predictions. The competitive ratio degrades smoothly as a function of the quality and we show that this dependence is tight within constant factors.

  • https://arxiv.org/abs/2306.11604

    Authors: Shuchi Chawla, Kristin Sheridan

    We study the design of embeddings into Euclidean space with outliers. Given a metric space (X,d) and an integer k, the goal is to embed all but k points in X (called the outliers) into Euclidean space with the smallest possible distortion c. Finding the optimal distortion c for a given outlier set size k, or alternately the smallest k for a given target distortion c, are both NP-hard problems. In fact, it is UGC-hard to approximate k to within a factor smaller than 2 even when the metric sans outliers is isometrically embeddable into Euclidean space. We consider bi-criteria approximations. Our main result is a polynomial time algorithm that approximates the outlier set size to within an O(log^2 k) factor and the distortion to within a constant factor. The main technical component in our result is an approach for constructing a composition of two given embeddings from subsets of X into Euclidean space which inherits the distortions of each to within small multiplicative factors. Specifically, given a low c_S distortion embedding from a subset S of X into ℓ2 and a high(er) c_X distortion embedding from the entire set X into ℓ2, we construct a single embedding that achieves the same distortion c_S over pairs of points in S and an expansion of at most O(log k)·c_X over the remaining pairs of points, where k is the number of outliers. Our composition theorem extends to embeddings into arbitrary ℓp metrics for p >= 1, and may be of independent interest. While unions of embeddings over disjoint sets have been studied previously, to our knowledge, this is the first work to consider compositions of nested embeddings.

2023
  • https://pubsonline.informs.org/doi/full/10.1287/opre.2023.0031

    Authors: Shuchi Chawla, Nikhil Devanur, and Thodoris Lykouris

    We study a pricing problem where a seller has k identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When k = 1, this is the so-called prophet inequality problem for which there is a simple pricing scheme achieving a competitive ratio of 1/2. On the other end of the spectrum, as k goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood. We provide a static pricing scheme for the small-supply regime: where k is small but larger than one. Prior to our work, the best competitive ratio known for this setting was the 1/2 that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical; it is anonymous, nonadaptive, and order oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This extends an approach introduced by Samuel-Cahn for the case of k = 1. This pricing scheme achieves a competitive ratio that increases gradually with the supply. Subsequent work shows that our pricing scheme is the optimal static pricing for every value of k.
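    Below is a minimal Monte Carlo sketch of the price-selection rule described above. It is illustrative only: the function names, the uniform-value toy example, and the exact "does not sell out" convention (demand at most k) are assumptions; the paper's definitions take precedence.

    ```python
    import random

    def pick_static_price(sample_values, k, n_sims=20000, lo=0.0, hi=1.0):
        """Bisect for a single static price p at which the expected fraction
        of the k items sold (at price p) equals the probability that supply
        does not run out before all buyers are served. Estimates are noisy
        Monte Carlo averages, which is fine for an illustration."""
        def gap(p):
            sold_frac = survive = 0.0
            for _ in range(n_sims):
                demand = sum(v >= p for v in sample_values())
                sold_frac += min(demand, k) / k
                survive += 1.0 if demand <= k else 0.0
            # fraction sold decreases in p while survival increases in p,
            # so gap(p) is (approximately) monotone decreasing
            return (sold_frac - survive) / n_sims
        for _ in range(40):
            mid = (lo + hi) / 2
            if gap(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # toy usage: 10 i.i.d. uniform[0,1] buyers, k = 3 identical items
    print(pick_static_price(lambda: [random.random() for _ in range(10)], k=3))
    ```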

  • https://arxiv.org/abs/2204.01962

    Authors: Shuchi Chawla, Rojin Rezvan, Yifeng Teng, and Christos Tzamos

    A recent line of research has established a novel desideratum for designing approximately-revenue-optimal multi-item mechanisms, namely the buy-many constraint. Under this constraint, prices for different allocations made by the mechanism must be subadditive implying that the price of a bundle cannot exceed the sum of prices of individual items it contains. This natural constraint has enabled several positive results in multi-item mechanism design bypassing well-established impossibility results. Our work addresses a main open question from this literature involving the design of buy-many mechanisms for multiple buyers. Our main result is that a simple sequential item pricing mechanism with buyer-specific prices can achieve an O(log m) approximation to the revenue of any buy-many mechanism when all buyers have unit-demand preferences over m items. This is the best possible as it directly matches the previous results for the single-buyer setting where no simple mechanism can obtain a better approximation. Our result applies in full generality: even though there are many alternative ways buy-many mechanisms can be defined for multi-buyer settings, our result captures all of them at the same time. We achieve this by directly competing with a more permissive upper-bound on the buy-many revenue, obtained via an ex-ante relaxation.
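    In symbols, the pricing consequence of the buy-many constraint mentioned above can be written schematically as follows: the induced price p of any buy-many mechanism must be subadditive, since the buyer can assemble a bundle from several smaller purchases.

    ```latex
    % Subadditivity forced by the buy-many constraint (schematic notation):
    p(S \cup T) \;\le\; p(S) + p(T)
    \qquad \text{and in particular} \qquad
    p(S) \;\le\; \sum_{i \in S} p(\{i\}).
    ```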

  • https://arxiv.org/abs/2108.12976

    Authors: Shuchi Chawla, Evangelia Gergatsouli, Jeremy McMahan, Christos Tzamos

    We revisit the classic Pandora's Box (PB) problem under correlated distributions on the box values. Recent work (Chawla et al., FOCS'20) obtained constant approximate algorithms for a restricted class of policies for the problem that visit boxes in a fixed order. In this work, we study the complexity of approximating the optimal policy which may adaptively choose which box to visit next based on the values seen so far. Our main result establishes an approximation-preserving equivalence of PB to the well studied Uniform Decision Tree (UDT) problem from stochastic optimization and a variant of the Min-Sum Set Cover (MSSC_f) problem. For distributions of support m, UDT admits a log m approximation, and while a constant factor approximation in polynomial time is a long-standing open problem, constant factor approximations are achievable in subexponential time (arXiv:1906.11385). Our main result implies that the same properties hold for PB and MSSC_f. We also study the case where the distribution over values is given more succinctly as a mixture of m product distributions. This problem is again related to a noisy variant of the Optimal Decision Tree which is significantly more challenging. We give a constant-factor approximation that runs in time n^O(m^2/ε^2) when the mixture components on every box are either identical or separated in TV distance by ε.

2022
  • https://arxiv.org/abs/2204.04136

    Authors: Shuchi Chawla, Rojin Rezvan, and Nathaniel Sauerberg

    We design fair sponsored search auctions that achieve a near-optimal tradeoff between fairness and quality. Our work builds upon the model and auction design of Chawla and Jagadeesan (CJ'22), who considered the special case of a single slot. We consider sponsored search settings with multiple slots and the standard model of click through rates that are multiplicatively separable into an advertiser-specific component and a slot-specific component. When similar users have similar advertiser-specific click through rates, our auctions achieve the same near-optimal tradeoff between fairness and quality as in CJ'22. When similar users can have different advertiser-specific preferences, we show that a preference-based fairness guarantee holds. Finally, we provide a computationally efficient algorithm for computing payments for our auctions as well as those in previous work, resolving another open direction from CJ'22.

  • https://arxiv.org/abs/2106.04704

    Authors: Shuchi Chawla, Rojin Rezvan, Yifeng Teng, and Christos Tzamos

    We study the revenue guarantees and approximability of item pricing. Recent work shows that with n heterogeneous items, item-pricing guarantees an O(log n) approximation to the optimal revenue achievable by any (buy-many) mechanism, even when buyers have arbitrarily combinatorial valuations. However, finding good item prices is challenging -- it is known that even under unit-demand valuations, it is NP-hard to find item prices that approximate the revenue of the optimal item pricing better than O(sqrt(n)). Our work provides a more fine-grained analysis of the revenue guarantees and computational complexity in terms of the number of item "categories", which may be significantly fewer than n. We assume the items are partitioned into k categories so that items within a category are totally ordered and a buyer's value for a bundle depends only on the best item contained from every category. We show that item-pricing guarantees an O(log k) approximation to the optimal (buy-many) revenue and provide a PTAS for computing the optimal item-pricing when k is constant. We also provide a matching lower bound showing that the problem is (strongly) NP-hard even when k=1. Our results naturally extend to the case where items are only partially ordered, in which case the revenue guarantees and computational complexity depend on the width of the partial ordering, i.e., the largest set for which no two items are comparable.

  • https://arxiv.org/abs/2003.13966

    Authors: Shuchi Chawla, Meena Jagadeesan

    We study the tradeoff between social welfare maximization and fairness in the context of ad auctions. We study an ad auction setting where users arrive one at a time, k advertisers submit values for each user, and the auction assigns a distribution over ads to each user. Following the works of Dwork and Ilvento (2019) and Chawla et al. (2020), our goal is to design a truthful auction that satisfies "individual fairness" in its outcomes: informally speaking, users that are similar to each other should obtain similar allocations of ads.
    We express the fairness constraint as a kind of stability condition: any two users that are assigned multiplicatively similar values by all the advertisers must receive additively similar allocations for each advertiser. This value stability constraint is expressed as a function that maps the multiplicative distance between value vectors to the maximum allowable ℓ∞ distance between the corresponding allocations. Standard auctions do not satisfy this kind of value stability.
    Our main contribution is a new class of allocation algorithms called Inverse Proportional Allocation that achieve value stability with respect to an expressive class of stability conditions. These allocation algorithms are truthful and prior-free, and achieve a constant factor approximation to the optimal (unconstrained) social welfare. In particular, the approximation ratio is independent of the number of advertisers in the system. In this respect, these allocation algorithms greatly surpass the guarantees achieved in previous work. In fact, our algorithms achieve a near optimal tradeoff between fairness and social welfare under a mild assumption on the value stability constraint. We also extend our results to broader notions of fairness that we call subset fairness.
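    One schematic way to write the value stability constraint described above (symbols are ours, not the paper's: x(u) is user u's allocation vector and f maps multiplicative value distance to the allowable allocation distance):

    ```latex
    % If every advertiser j assigns u and u' values within a factor e^\lambda
    % of each other, the allocations must be close in \ell_\infty distance:
    \max_j \left| \log \frac{v_j(u)}{v_j(u')} \right| \le \lambda
    \quad \Longrightarrow \quad
    \| x(u) - x(u') \|_\infty \;\le\; f(\lambda).
    ```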

2021
  • https://arxiv.org/abs/2007.07990

    Authors: Shuchi Chawla, Nikhil Devanur, Thodoris Lykouris

    We study a pricing problem where a seller has k identical copies of a product, buyers arrive sequentially, and the seller prices the items aiming to maximize social welfare. When k=1, this is the so-called prophet inequality problem for which there is a simple pricing scheme achieving a competitive ratio of 1/2. On the other end of the spectrum, as k goes to infinity, the asymptotic performance of both static and adaptive pricing is well understood.
    We provide a static pricing scheme for the small-supply regime: where k is small but larger than 1. Prior to our work, the best competitive ratio known for this setting was the 1/2 that follows from the single-unit prophet inequality. Our pricing scheme is easy to describe as well as practical -- it is anonymous, non-adaptive, and order-oblivious. We pick a single price that equalizes the expected fraction of items sold and the probability that the supply does not sell out before all customers are served; this price is then offered to each customer while supply lasts. This pricing scheme achieves a competitive ratio that increases gradually with the supply and approaches 1 at the optimal rate. Astonishingly, for k<20, it even outperforms the state-of-the-art adaptive pricing for the small-k regime.

2020
  • https://arxiv.org/abs/1911.01632

    Authors: Shuchi Chawla, Evangelia Gergatsouli, Yifeng Teng, Christos Tzamos, Ruimin Zhang

    The Pandora's Box problem and its extensions capture optimization problems with stochastic input where the algorithm can obtain instantiations of input random variables at some cost. To our knowledge, all previous work on this class of problems assumes that different random variables in the input are distributed independently. As such it does not capture many real-world settings. In this paper, we provide the first approximation algorithms for Pandora's Box-type problems with correlations. We assume that the algorithm has access to samples drawn from the joint distribution on input.
    Algorithms for these problems must determine an order in which to probe random variables, as well as when to stop and return the best solution found so far. In general, an optimal algorithm may make both decisions adaptively based on instantiations observed previously. Such fully adaptive (FA) strategies cannot be efficiently approximated to within any sublinear factor with sample access. We therefore focus on the simpler objective of approximating partially adaptive (PA) strategies that probe random variables in a fixed predetermined order but decide when to stop based on the instantiations observed. We consider a number of different feasibility constraints and provide simple PA strategies that are approximately optimal with respect to the best PA strategy for each case. All of our algorithms have polynomial sample complexity. We further show that our results are tight within constant factors: better factors cannot be achieved even using the full power of FA strategies.
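    For intuition, here is a minimal sketch of what a partially adaptive (PA) strategy looks like in the minimization version of the problem: a fixed probing order plus a stopping rule that may depend on the values seen. The costs, correlated sampler, and threshold rule are hypothetical placeholders, not the approximately optimal strategies constructed in the paper.

    ```python
    import random

    COSTS = [0.3, 0.5, 0.2, 0.4]  # hypothetical probing costs, one per box

    def sample_joint():
        """Toy correlated instance: a shared shock plus box-specific noise."""
        shock = random.random()
        return [shock + random.random() for _ in COSTS]

    def run_pa(order, threshold):
        """A PA strategy probes boxes in a fixed order but stops adaptively;
        here we stop once the best value found drops to the threshold.
        Objective (minimization Pandora's Box): value taken + probing cost."""
        values = sample_joint()
        best, cost = float("inf"), 0.0
        for i in order:
            cost += COSTS[i]              # pay to open box i
            best = min(best, values[i])   # keep the best value seen so far
            if best <= threshold:         # adaptive stopping decision
                break
        return best + cost

    # toy usage: a fixed order over the four boxes, stop at value <= 0.8
    print(run_pa(order=[2, 0, 3, 1], threshold=0.8))
    ```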

  • https://arxiv.org/abs/2003.10636

    Authors: Shuchi Chawla, Yifeng Teng, Christos Tzamos

    We study the multi-item mechanism design problem where a monopolist sells n heterogeneous items to a single buyer. We focus on buy-many mechanisms, a natural class of mechanisms frequently used in practice. The buy-many property allows the buyer to interact with the mechanism multiple times instead of once as in the more commonly studied buy-one mechanisms. This imposes additional incentive constraints and thus restricts the space of mechanisms that the seller can use.
    In this paper, we explore the qualitative differences between buy-one and buy-many mechanisms focusing on two important properties: revenue continuity and menu-size complexity.
    Our first main result is that when the value function of the buyer is perturbed multiplicatively by a factor of 1±ϵ, the optimal revenue obtained by buy-many mechanisms only changes by a factor of 1±poly(n,ϵ). In contrast, for buy-one mechanisms, the revenue of the resulting optimal mechanism after such a perturbation can change arbitrarily.
    Our second main result shows that under any distribution of arbitrary valuations, finite menu size suffices to achieve a (1−ϵ)-approximation to the optimal buy-many mechanism. We give tight upper and lower bounds on the number of menu entries as a function of n and ϵ. On the other hand, such a result fails to hold for buy-one mechanisms as even for two items and a buyer with either unit-demand or additive valuations, the menu-size complexity of approximately optimal mechanisms is unbounded.

  • https://arxiv.org/abs/1907.01484

    Authors: Kshiteej Mahajan, Arjun Balasubramanian, Arjun Singhvi, Shivaram Venkataraman, Aditya Akella, Amar Phanishayee, Shuchi Chawla

    Modern distributed machine learning (ML) training workloads benefit significantly from leveraging GPUs. However, significant contention ensues when multiple such workloads are run atop a shared cluster of GPUs. A key question is how to fairly apportion GPUs across workloads. We find that established cluster scheduling disciplines are a poor fit because of ML workloads' unique attributes: ML jobs have long-running tasks that need to be gang-scheduled, and their performance is sensitive to tasks' relative placement.
    We propose Themis, a new scheduling framework for ML training workloads. Its GPU allocation policy enforces that ML workloads complete in a finish-time fair manner, a new notion we introduce. To capture placement sensitivity and ensure efficiency, Themis uses a two-level scheduling architecture where ML workloads bid on available resources that are offered in an auction run by a central arbiter. Our auction design allocates GPUs to winning bids by trading off efficiency for fairness in the short term but ensuring finish-time fairness in the long term. Our evaluation on a production trace shows that Themis can improve fairness by more than 2.25X and is ~5% to 250% more cluster efficient in comparison to state-of-the-art schedulers.

  • https://arxiv.org/abs/1906.08732

    Authors: Shuchi Chawla, Christina Ilvento, Meena Jagadeesan

    Fairness in advertising is a topic of particular concern motivated by theoretical and empirical observations in both the computer science and economics literature. We examine the problem of fairness in advertising for general purpose platforms that service advertisers from many different categories. First, we propose inter-category and intra-category fairness desiderata that take inspiration from individual fairness and envy-freeness. Second, we investigate the "platform utility" (a proxy for the quality of the allocation) achievable by mechanisms satisfying these desiderata. More specifically, we compare the utility of fair mechanisms against the unfair optimal, and we show by construction that our fairness desiderata are compatible with utility. That is, we construct a family of fair mechanisms with high utility that perform close to optimally within a class of fair mechanisms. Our mechanisms also enjoy nice implementation properties including metric-obliviousness, which allows the platform to produce fair allocations without needing to know the specifics of the fairness requirements.

  • https://arxiv.org/abs/1909.00845

    Authors: Shuchi Chawla, Shaleen Deep, Paraschos Koutris, Yifeng Teng

    Buying and selling of data online has increased substantially over the last few years. Several frameworks have already been proposed that study query pricing in theory and practice. The key guiding principle in these works is the notion of arbitrage-freeness where the broker can set different prices for different queries made to the dataset, but must ensure that the pricing function does not provide the buyers with opportunities for arbitrage. However, little is known about the revenue maximization aspect of query pricing. In this paper, we study the problem faced by a broker selling access to data with the goal of maximizing her revenue. We show that this problem can be formulated as a revenue maximization problem with single-minded buyers and unlimited supply, for which several approximation algorithms are known. We perform an extensive empirical evaluation of the performance of several pricing algorithms for the query pricing problem on real-world instances. In addition to previously known approximation algorithms, we propose several new heuristics and analyze them both theoretically and experimentally. Our experiments show that algorithms with the best theoretical bounds are not necessarily the best empirically. We identify algorithms and heuristics that are both fast and also provide consistently good performance when valuations are drawn from a variety of distributions.

2019
  • https://arxiv.org/abs/1902.10315

    Authors: Shuchi Chawla, Yifeng Teng, Christos Tzamos

    Multi-item mechanisms can be very complex, offering the buyer many different bundles that may even be randomized. Such complexity is thought to be necessary as the revenue gaps between randomized and deterministic mechanisms, or deterministic and simple mechanisms, are huge even for additive valuations.
    We challenge this conventional belief by showing that these large gaps can only happen in restricted situations. These are situations where the mechanism overcharges a buyer for a bundle while selling individual items at much lower prices. Arguably this is impractical in many settings because the buyer can break his order into smaller pieces paying a much lower price overall. Our main result is that if the buyer is allowed to purchase as many (randomized) bundles as he pleases, the revenue of any multi-item mechanism is at most O(log n) times the revenue achievable by item pricing, where n is the number of items. This holds in the most general setting possible, with an arbitrarily correlated distribution of buyer types and arbitrary valuations.
    We also show that this result is tight in a very strong sense. Any family of mechanisms of subexponential description complexity cannot achieve better than logarithmic approximation even against the best deterministic mechanism and even for additive valuations. In contrast, item pricing that has linear description complexity matches this bound against randomized mechanisms.

  • https://arxiv.org/abs/1708.00043

    Authors: Shuchi Chawla, J. Benjamin Miller, Yifeng Teng

    We present pricing mechanisms for several online resource allocation problems which obtain tight or nearly tight approximations to social welfare. In our settings, buyers arrive online and purchase bundles of items; buyers' values for the bundles are drawn from known distributions. This problem is closely related to the so-called prophet-inequality of Krengel and Sucheston and its extensions in recent literature. Motivated by applications to cloud economics, we consider two kinds of buyer preferences. In the first, items correspond to different units of time at which a resource is available; the items are arranged in a total order and buyers desire intervals of items. The second corresponds to bandwidth allocation over a tree network; the items are edges in the network and buyers desire paths.
    Because buyers' preferences have complementarities in the settings we consider, recent constant-factor approximations via item prices do not apply, and indeed strong negative results are known. We develop static, anonymous bundle pricing mechanisms.
    For the interval preferences setting, we show that static, anonymous bundle pricings achieve a sublogarithmic competitive ratio, which is optimal (within constant factors) over the class of all online allocation algorithms, truthful or not. For the path preferences setting, we obtain a nearly-tight logarithmic competitive ratio. Both of these results exhibit an exponential improvement over item pricings for these settings. Our results extend to settings where the seller has multiple copies of each item, with the competitive ratio decreasing linearly with supply. Such a gradual tradeoff between supply and the competitive ratio for welfare was previously known only for the single item prophet inequality.

2018
  • https://www.usenix.org/biblio-2264

    Authors: Kshiteej Mahajan, Mosharaf Chowdhury, Aditya Akella, Shuchi Chawla

    Modern data processing clusters are highly dynamic -- both in terms of the number of concurrently running jobs and their resource usage. To improve job performance, recent works have focused on optimizing the cluster scheduler and the jobs' query planner with a focus on picking the right query execution plan (QEP) -- represented as a directed acyclic graph -- for a job in a resource-aware manner, and scheduling jobs in a QEP-aware manner. However, because existing solutions use a fixed QEP throughout the entire execution, the inability to adapt a QEP in reaction to resource changes often leads to large performance inefficiencies.
    This paper argues for dynamic query re-planning, wherein we re-evaluate and re-plan a job's QEP during its execution. We show that designing for re-planning requires fundamental changes to the interfaces between key layers of data analytics stacks today, i.e., the query planner, the execution engine, and the cluster scheduler. Instead of pushing more complexity into the scheduler or the query planner, we argue for a redistribution of responsibilities between the three components to simplify their designs. Under this redesign, we analytically show that a greedy algorithm for re-planning and execution alongside a simple max-min fair scheduler can offer provably competitive behavior even under adversarial resource changes. We prototype our algorithms atop Apache Hive and Tez. Via extensive experiments, we show that our design can offer a median performance improvement of 1.47x compared to state-of-the-art alternatives.

  • http://arxiv.org/abs/1611.07745

    Authors: Shuchi Chawla, Joseph (Seffi) Naor, Debmalya Panigrahi, Mohit Singh, Seeun William Umboh

    A central question in algorithmic game theory is to measure the inefficiency (ratio of costs) of Nash equilibria (NE) with respect to socially optimal solutions. The two established metrics used for this purpose are price of anarchy (POA) and price of stability (POS), which respectively provide upper and lower bounds on this ratio. A deficiency of these metrics, however, is that they are purely existential and shed no light on which of the equilibrium states are reachable in an actual game, i.e., via natural game dynamics. This is particularly striking if these metrics differ significantly, such as in network design games where the exponential gap between the best and worst NE states originally prompted the notion of POS in game theory (Anshelevich et al., FOCS 2002). In this paper, we make progress toward bridging this gap by studying network design games under natural game dynamics.
    First we show that in a completely decentralized setting, where agents arrive, depart, and make improving moves in an arbitrary order, the inefficiency of NE attained can be polynomially large. This implies that the game designer must have some control over the interleaving of these events in order to force the game to attain efficient NE. We complement our negative result by showing that if the game designer is allowed to execute a sequence of improving moves to create an equilibrium state after every batch of agent arrivals or departures, then the resulting equilibrium states attained by the game are exponentially more efficient, i.e., the ratio of costs compared to the optimum is only logarithmic. Overall, our two results establish that in network games, the efficiency of equilibrium states is dictated by whether agents are allowed to join or leave the game in arbitrary states, an observation that might be useful in analyzing the dynamics of other classes of games with divergent POS and POA bounds.

  • https://arxiv.org/abs/1703.08607

    Authors: Shuchi Chawla, Kira Goldner, J. Benjamin Miller, Emmanouil Pountourakis

    Most work in mechanism design assumes that buyers are risk neutral; some considers risk aversion arising due to a non-linear utility for money. Yet behavioral studies have established that real agents exhibit risk attitudes which cannot be captured by any expected utility model. We initiate the study of revenue-optimal mechanisms under buyer behavioral models beyond expected utility theory. We adopt a model from prospect theory which arose to explain these discrepancies and incorporates agents under-weighting uncertain outcomes. In our model, an event occurring with probability x<1 is worth strictly less to the agent than x times the value of the event when it occurs with certainty.
    In contrast to the risk-neutral setting, the optimal mechanism may be randomized and appears challenging to find, even for a single buyer and a single item for sale. Nevertheless, we give a characterization of the optimal mechanism which enables positive approximation results. In particular, we show that under a reasonable bounded-risk-aversion assumption, posted pricing obtains a constant approximation. Notably, this result is "risk-robust" in that it does not depend on the details of the buyer's risk attitude. Finally, we examine a dynamic setting in which the buyer is uncertain about his future value. In contrast to positive results for a risk-neutral buyer, we show that the buyer's risk aversion may prevent the seller from approximating the optimal revenue in a risk-robust manner.

  • https://arxiv.org/abs/1708.04699

    Authors: Shuchi Chawla, Jason D. Hartline, Denis Nekipelov

    This paper develops the theory of mechanism redesign by which an auctioneer can reoptimize an auction based on bid data collected from previous iterations of the auction on bidders from the same market. We give a direct method for estimation of the revenue of a counterfactual auction from the bids in the current auction. The estimator is a simple weighted order statistic of the bids and has the optimal error rate. Two applications of our estimator are A/B testing (a.k.a., randomized controlled trials) and instrumented optimization (i.e., revenue optimization subject to being able to do accurate inference of any counterfactual auction revenue).

2017
  • https://arxiv.org/abs/1703.00484

    Authors: Shuchi Chawla, Nikhil Devanur, Janardhan Kulkarni, Rad Niazadeh

    We consider a scheduling problem where a cloud service provider has multiple units of a resource available over time. Selfish clients submit jobs, each with an arrival time, deadline, length, and value. The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule. Recent work shows that under a stochastic assumption on job arrivals, there is a single-parameter family of mechanisms that achieves near-optimal social welfare. We show that given any such family of near-optimal online mechanisms, there exists an online mechanism that in the worst case performs nearly as well as the best of the given mechanisms. Our mechanism is truthful whenever the mechanisms in the given family are truthful and prompt, and achieves optimal (within constant factors) regret.
    We model the problem of competing against a family of online scheduling mechanisms as one of learning from expert advice. A primary challenge is that any scheduling decisions we make affect not only the payoff at the current step, but also the resource availability and payoffs in future steps. Furthermore, switching from one algorithm (a.k.a. expert) to another in an online fashion is challenging both because it requires synchronization with the state of the latter algorithm as well as because it affects the incentive structure of the algorithms. We further show how to adapt our algorithm to a non-clairvoyant setting where job lengths are unknown until jobs are run to completion. Once again, in this setting, we obtain truthfulness along with asymptotically optimal regret (within poly-logarithmic factors).

  • https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/cloud.pdf

    Authors: Shuchi Chawla, Nikhil R. Devanur, Alexander E. Holroyd, Anna R. Karlin, James Martin, Balasubramanian Sivan

    We consider time-of-use pricing as a technique for matching supply and demand of temporal resources with the goal of maximizing social welfare. Relevant examples include energy, computing resources on a cloud computing platform, and charging stations for electric vehicles, among many others. A client/job in this setting has a window of time during which he needs service, and a particular value for obtaining it. We assume a stochastic model for demand, where each job materializes with some probability via an independent Bernoulli trial. Given a per-time-unit pricing of resources, any realized job will first try to get served by the cheapest available resource in its window and, failing that, will try to find service at the next cheapest available resource, and so on. Thus, the natural stochastic fluctuations in demand have the potential to lead to cascading overload events. Our main result shows that setting prices so as to optimally handle the expected demand works well: with high probability, when the actual demand is instantiated, the system is stable and the expected value of the jobs served is very close to that of the optimal offline algorithm.

2016
  • http://arxiv.org/abs/1603.03806

    Authors: Shuchi Chawla, J. Benjamin Miller

    We consider the problem of maximizing revenue for a monopolist offering multiple items to multiple heterogeneous buyers. We develop a simple mechanism that obtains a constant factor approximation under the assumption that the buyers' values are additive subject to a feasibility constraint and independent across items. Importantly, different buyers in our setting can have different constraints on the sets of items they desire. Our mechanism is a sequential variant of two-part tariffs. Prior to our work, simple approximation mechanisms for such multi-buyer problems were known only for the special cases of all unit-demand or all additive value buyers.
    Our work expands upon and unifies long lines of work on unit-demand settings and additive settings. We employ the ex ante relaxation approach developed by Alaei (2011) for reducing a multiple-buyer mechanism design problem with an ex post supply constraint into single-buyer ones with ex ante supply constraints. Solving the single-agent problems requires us to significantly extend techniques developed in the context of additive values by Li and Yao (2013) and their extension to subadditive values by Rubinstein and Weinberg (2015).

  • https://arxiv.org/abs/1606.00908

    Authors: Shuchi Chawla, Jason D. Hartline, Denis Nekipelov

    For many application areas, A/B testing, which partitions users of a system into an A (control) and B (treatment) group to experiment between several application designs, enables Internet companies to optimize their services to the behavioral patterns of their users. Unfortunately, the A/B testing framework cannot be applied in a straightforward manner to applications like auctions where the users (a.k.a., bidders) submit bids before the partitioning into the A and B groups is made. This paper combines auction theoretic modeling with the A/B testing framework to develop methodology for A/B testing auctions. The accuracy of our method is directly comparable to ideal A/B testing where there is no interference between A and B. Our results are based on an extension and improved analysis of the inference method of Chawla et al. (2014).

  • http://pages.cs.wisc.edu/~shuchi/papers/PPP.pdf

    Authors: Shuchi Chawla, Nikhil R. Devanur, Anna R. Karlin, Balasubramanian Sivan

    We consider a pricing problem where a buyer is interested in purchasing/using a good, such as an app or music or software, repeatedly over time. The consumer discovers his value for the good only as he uses it, and the value evolves with each use. Optimizing for the seller's revenue in such dynamic settings is a complex problem and requires assumptions about how the buyer behaves before learning his future value(s), and in particular, how he reacts to risk. We explore the performance of a class of pricing mechanisms that are extremely simple for both the buyer and the seller to use: the buyer reacts to prices myopically without worrying about how his value evolves in the future; the seller needs to optimize for revenue over a space of only two parameters, and can do so without knowing the buyer's risk profile or fine details of the value evolution process. We present simple-versus-optimal type results, namely that under certain assumptions, simple pricing mechanisms of the above form are approximately optimal regardless of the buyer's risk profile.
    Our results assume that the buyer's value per usage evolves as a martingale. For our main result, we consider pricing mechanisms in which the seller offers the product for free for a certain number of uses, and then charges an appropriate fixed price per usage. We assume that the buyer responds by buying the product for as long as his value exceeds the fixed price. Importantly, the buyer does not need to know anything about how his future value will evolve, only how much he wants to use the product right now. Regardless of the buyers' initial value, our pricing captures as revenue a constant fraction of the total value that the buyers accumulate in expectation over time.
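    A toy simulation of the two-parameter scheme described above. The multiplicative value process and all parameter names are illustrative assumptions; the paper's guarantees hold for general martingale value evolution.

    ```python
    import random

    def revenue_free_then_price(T, p, n_steps=200, v0=1.0, sigma=0.1):
        """Simulate one buyer under the two-parameter pricing: the product is
        free for the first T uses, then costs p per use; the myopic buyer keeps
        using it as long as the current per-use value exceeds the price."""
        v, revenue = v0, 0.0
        for t in range(n_steps):
            if t >= T:
                if v < p:        # myopic buyer walks away once value < price
                    break
                revenue += p
            # martingale step: multiply by a mean-one random factor
            v *= 1.0 + sigma * random.choice([-1.0, 1.0])
        return revenue

    # toy usage: 10 free uses, then a fixed price of 0.5 per use
    print(revenue_free_then_price(T=10, p=0.5))
    ```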

2015
  • http://arxiv.org/abs/1412.0681

    Authors: Shuchi Chawla, Konstantin Makarychev, Tselil Schramm, Grigory Yaroslavtsev

    We give new rounding schemes for the standard linear programming relaxation of the correlation clustering problem, achieving approximation factors almost matching the integrality gaps:
    - For complete graphs our approximation is 2.06−ε for a fixed constant ε, which almost matches the previously known integrality gap of 2.
    - For complete k-partite graphs our approximation is 3. We also show a matching integrality gap.
    - For complete graphs with edge weights satisfying triangle inequalities and probability constraints, our approximation is 1.5, and we show an integrality gap of 1.2.
    Our results improve a long line of work on approximation algorithms for correlation clustering in complete graphs, previously culminating in a ratio of 2.5 for the complete case by Ailon, Charikar and Newman (JACM'08). In the weighted complete case satisfying triangle inequalities and probability constraints, the same authors give a 2-approximation; for the bipartite case, Ailon, Avigdor-Elgrabli, Liberty and van Zuylen give a 4-approximation (SICOMP'12).
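    For reference, the standard LP relaxation being rounded here, in its usual unweighted form (x_{uv} measures how separated u and v are; E^+ and E^- are the similar and dissimilar pairs):

    ```latex
    \min \; \sum_{(u,v) \in E^+} x_{uv} \;+\; \sum_{(u,v) \in E^-} (1 - x_{uv})
    \quad \text{s.t.} \quad
    x_{uw} \le x_{uv} + x_{vw} \;\; \forall\, u,v,w,
    \qquad x_{uv} \in [0,1].
    ```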

2014
  • http://www.sigecom.org/exchanges/volume_13/1/CHAWLA.pdf

    Authors: Shuchi Chawla, Balasubramanian Sivan

    This article surveys recent work with an algorithmic flavor in Bayesian mechanism design. Bayesian mechanism design involves optimization in economic settings where the designer possesses some stochastic information about the input. Recent years have witnessed huge advances in our knowledge and understanding of algorithmic techniques for Bayesian mechanism design problems. These include, for example, revenue maximization in settings where buyers have multi-dimensional preferences, optimization of non-linear objectives such as makespan, and generic reductions from mechanism design to algorithm design. However, a number of tantalizing questions remain unsolved. This article is meant to serve as an introduction to Bayesian mechanism design for a novice, as well as a starting point for a broader literature search for an experienced researcher.

  • http://arxiv.org/abs/1304.3868

    Authors: Siddharth Barman, Shuchi Chawla, Seeun Umboh

    We study network design with a cost structure motivated by redundancy in data traffic. We are given a graph, g groups of terminals, and a universe of data packets. Each group of terminals desires a subset of the packets from its respective source. The cost of routing traffic on any edge in the network is proportional to the total size of the distinct packets that the edge carries. Our goal is to find a minimum cost routing. We focus on two settings. In the first, the collection of packet sets desired by source-sink pairs is laminar. For this setting, we present a primal-dual based 2-approximation, improving upon a logarithmic approximation due to Barman and Chawla (2012). In the second setting, packet sets can have non-trivial intersection. We focus on the case where each packet is desired by either a single terminal group or by all of the groups, and the graph is unweighted. For this setting we present an O(log g)-approximation. Our approximation for the second setting is based on a novel spanner-type construction in unweighted graphs that, given a collection of g vertex subsets, finds a subgraph of cost only a constant factor more than the minimum spanning tree of the graph, such that every subset in the collection has a Steiner tree in the subgraph of cost at most O(log g) times that of its minimum Steiner tree in the original graph. We call such a subgraph a group spanner.

  • http://arxiv.org/abs/1404.5971

    Authors: Shuchi Chawla, Jason Hartline, Denis Nekipelov

    Good economic mechanisms depend on the preferences of participants in the mechanism. For example, the revenue-optimal auction for selling an item is parameterized by a reserve price, and the appropriate reserve price depends on how much the bidders are willing to pay. A mechanism designer can potentially learn about the participants' preferences by observing historical data from the mechanism; the designer could then update the mechanism in response to learned preferences to improve its performance. The challenge of such an approach is that the data corresponds to the actions of the participants and not their preferences. Preferences can potentially be inferred from actions but the degree of inference possible depends on the mechanism. In the optimal auction example, it is impossible to learn anything about preferences of bidders who are not willing to pay the reserve price. These bidders will not cast bids in the auction and, from historical bid data, the auctioneer could never learn that lowering the reserve price would give a higher revenue (even if it would). To address this impossibility, the auctioneer could sacrifice revenue optimality in the initial auction to obtain better inference properties so that the auction's parameters can be adapted to changing preferences in the future. This paper develops the theory for optimal mechanism design subject to good inferability.

  • http://arxiv.org/abs/1408.4424

    Authors: Shuchi Chawla, Hu Fu, Anna Karlin

    We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, they require the value functions to satisfy a standard single-crossing property and a concavity-type condition.

2013
  • http://arxiv.org/abs/1305.0597

    Authors: Shuchi Chawla, Jason D. Hartline, David Malec, Balasubramanian Sivan

    We study the makespan minimization problem with unrelated selfish machines under the assumption that job sizes are stochastic. We design simple truthful mechanisms that under various distributional assumptions provide constant and sublogarithmic approximations to expected makespan. Our mechanisms are prior-independent in that they do not rely on knowledge of the job size distributions. Prior-independent approximation mechanisms have been previously studied for the objective of revenue maximization [Dhangwatnotai, Roughgarden and Yan'10, Devanur, Hartline, Karlin and Nguyen'11, Roughgarden, Talgam-Cohen and Yan'12]. In contrast to our results, in prior-free settings no truthful anonymous deterministic mechanism for the makespan objective can provide a sublinear approximation [Ashlagi, Dobzinski and Lavi'09].

  • http://dl.acm.org/citation.cfm?id=2483188

    Authors: Shuchi Chawla, Jason D. Hartline

    We study Bayes-Nash equilibria in a large class of anonymous order-based auctions. These include the generalized first-price auction for allocating positions to bidders, e.g., for sponsored search. We show that when bidders' values are independent and identically distributed the symmetric equilibrium is unique and efficient. Importantly, our proof is simple and structurally revealing. This uniqueness result for the generalized first-price auction is in stark contrast to the generalized second-price auction where there may be no efficient equilibrium. This result suggests, e.g., that first-price payment semantics may have advantages over second-price payment semantics. Our results extend also to certain models of risk aversion.

2012
  • http://arxiv.org/abs/1109.2067

    Authors: Shuchi Chawla, Nicole Immorlica, Brendan Lucier

    We consider the problem of converting an arbitrary approximation algorithm for a single-parameter optimization problem into a computationally efficient truthful mechanism. We ask for reductions that are black-box, meaning that they require only oracle access to the given algorithm and in particular do not require explicit knowledge of the problem constraints. Such a reduction is known to be possible, for example, for the social welfare objective when the goal is to achieve Bayesian truthfulness and preserve social welfare in expectation. We show that a black-box reduction for the social welfare objective is not possible if the resulting mechanism is required to be truthful in expectation and to preserve the worst-case approximation ratio of the algorithm to within a subpolynomial factor. Further, we prove that for other objectives such as makespan, no black-box reduction is possible even if we only require Bayesian truthfulness and an average-case performance guarantee.

  • http://arxiv.org/abs/1204.5823

    Authors: Siddharth Barman, Shuchi Chawla, Seeun Umboh

    In the reordering buffer problem (RBP), a server is asked to process a sequence of requests lying in a metric space. To process a request the server must move to the corresponding point in the metric. The requests can be processed slightly out of order; in particular, the server has a buffer of capacity k which can store up to k requests as it reads in the sequence. The goal is to reorder the requests in such a manner that the buffer constraint is satisfied and the total travel cost of the server is minimized. The RBP arises in many applications that require scheduling with a limited buffer capacity, such as scheduling a disk arm in storage systems, switching colors in paint shops of a car manufacturing plant, and rendering 3D images in computer graphics. We study the offline version of RBP and develop bicriteria approximations. When the underlying metric is a tree, we obtain a solution of cost no more than 9·OPT using a buffer of capacity 4k + 1 where OPT is the cost of an optimal solution with buffer capacity k. Constant factor approximations were known previously only for the uniform metric (Avigdor-Elgrabli et al., 2012). Via randomized tree embeddings, this implies an O(log n) approximation to cost and O(1) approximation to buffer size for general metrics. Previously the best known algorithm for arbitrary metrics by Englert et al. (2007) provided an O(log^2 k log n) approximation without violating the buffer constraint.
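    To make the buffer mechanics concrete, here is a naive greedy baseline on the line metric (always serve the buffered request nearest to the server). This is only an illustration of the problem, not the paper's bicriteria algorithm.

    ```python
    def greedy_rbp(requests, k, start=0.0):
        """Reordering buffer on the line: hold up to k pending requests and
        repeatedly serve the one closest to the server's current position.
        Returns the total travel cost of this (heuristic) schedule."""
        buffer, pos, cost, i = [], start, 0.0, 0
        while i < len(requests) or buffer:
            while i < len(requests) and len(buffer) < k:
                buffer.append(requests[i])   # read the next request into the buffer
                i += 1
            nxt = min(buffer, key=lambda q: abs(q - pos))
            buffer.remove(nxt)
            cost += abs(nxt - pos)           # travel to the served request
            pos = nxt
        return cost

    # toy usage: requests at points on a line, buffer capacity 3
    print(greedy_rbp([5.0, 1.0, 4.0, 2.0, 8.0, 2.5], k=3))
    ```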

  • http://arxiv.org/abs/1112.1136

    Authors: Siddharth Barman, Seeun Umboh, Shuchi Chawla, David Malec

    We consider online resource allocation problems where given a set of requests our goal is to select a subset that maximizes a value minus cost type of objective function. Requests are presented online in random order, and each request possesses an adversarial value and an adversarial size. The online algorithm must make an irrevocable accept/reject decision as soon as it sees each request. The "profit" of a set of accepted requests is its total value minus a convex cost function of its total size. This problem falls within the framework of secretary problems. Unlike previous work in that area, one of the main challenges we face is that the objective function can be positive or negative and we must guard against accepting requests that look good early on but cause the solution to have an arbitrarily large cost as more requests are accepted. This requires designing new techniques. We study this problem under various feasibility constraints and present online algorithms with competitive ratios only a constant factor worse than those known in the absence of costs for the same feasibility constraints. We also consider a multi-dimensional version of the problem that generalizes multi-dimensional knapsack within a secretary framework. In the absence of any feasibility constraints, we present an O(l) competitive algorithm where l is the number of dimensions; this matches within constant factors the best known ratio for multi-dimensional knapsack secretary.

  • http://arxiv.org/abs/1110.4150

    Authors: Siddharth Barman, Shuchi Chawla

    We consider network design problems for information networks where routers can replicate data but cannot alter it. This functionality allows the network to eliminate data-redundancy in traffic, thereby saving on routing costs. We consider two problems within this framework and design approximation algorithms. The first problem we study is the traffic-redundancy aware network design (RAND) problem. We are given a weighted graph over a single server and many clients. The server owns a number of different data packets and each client desires a subset of the packets; the client demand sets form a laminar set system. Our goal is to connect every client to the source via a single path, such that the collective cost of the resulting network is minimized. Here the transportation cost over an edge is its weight times the number of distinct packets that it carries. The second problem is a facility location problem that we call RAFL. Here the goal is to find an assignment from clients to facilities such that the total cost of routing packets from the facilities to clients (along unshared paths), plus the total cost of "producing" one copy of each desired packet at each facility is minimized. We present a constant factor approximation for the RAFL and an O(log P) approximation for RAND, where P is the total number of distinct packets. We remark that P is always at most the number of different demand sets desired or the number of clients, and is generally much smaller.

  • http://arxiv.org/abs/1111.2893

    Authors: Shuchi Chawla, Jason D. Hartline, Balasubramanian Sivan

    We study the design and approximation of optimal crowdsourcing contests. Crowdsourcing contests can be modeled as all-pay auctions because entrants must exert effort up-front to enter. Unlike all-pay auctions where a usual design objective would be to maximize revenue, in crowdsourcing contests, the principal only benefits from the submission with the highest quality. We give a theory for optimal crowdsourcing contests that mirrors the theory of optimal auction design: the optimal crowdsourcing contest is a virtual valuation optimizer (the virtual valuation function depends on the distribution of contestant skills and the number of contestants). We also compare crowdsourcing contests with more conventional means of procurement. In this comparison, crowdsourcing contests are relatively disadvantaged because the effort of losing contestants is wasted. Nonetheless, we show that crowdsourcing contests are 2-approximations to conventional methods for a large family of "regular" distributions, and 4-approximations, otherwise.
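    For readers unfamiliar with the auction-theory analogue: Myerson's virtual valuation, which the optimal-contest characterization above parallels, is defined for a value (here, skill) distribution F with density f as

    ```latex
    % Myerson's virtual valuation for a distribution F with density f:
    \phi(v) \;=\; v \;-\; \frac{1 - F(v)}{f(v)},
    ```

    and the "regular" distributions mentioned above are exactly those for which φ is non-decreasing.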

2011
  • http://arxiv.org/abs/1103.6280

    Authors: Shuchi Chawla, David Malec, Azarakhsh Malekian

    We study Bayesian mechanism design problems in settings where agents have budgets. Specifically, an agent's utility for an outcome is his value for the outcome minus any payment he makes to the mechanism, as long as the payment is below his budget, and is negative infinity otherwise. This discontinuity in the utility function presents a significant challenge in the design of good mechanisms, and classical "unconstrained" mechanisms fail to work in settings with budgets. The goal of this paper is to develop general reductions from budget-constrained Bayesian mechanism design to unconstrained Bayesian mechanism design with a small loss in performance. We consider this question in the context of the two most well-studied objectives in mechanism design---social welfare and revenue---and present constant factor approximations in a number of settings. Some of our results extend to settings where budgets are private and agents must be incentivized to reveal them truthfully.
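
    Concretely, writing v_i for agent i's value for the outcome, p_i for his payment, and B_i for his budget, the utility described above is

        u_i \;=\; \begin{cases} v_i - p_i & \text{if } p_i \le B_i,\\ -\infty & \text{otherwise,} \end{cases}

    so a mechanism can never charge an agent more than his budget without making that agent infinitely worse off.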

  • http://dl.acm.org/citation.cfm?id=1993798

    Authors: Aditya Akella, Shuchi Chawla, Holly Esquivel, Chitra Muthukrishnan

    We present the S4R supplemental routing system to address the constraints BGP places on ISPs and stub networks alike. Technical soundness and economic viability are equally first-class design requirements for S4R. In S4R, ISPs announce links connecting different parts of the Internet. ISPs can selfishly price their links to attract a maximal amount of traffic. Stub networks can selfishly select paths that best meet their requirements at the lowest cost. We design a variety of practical algorithms for ISP and stub-network response that strike a balance between accommodating the selfishness of all participants and ensuring efficient and stable operation overall. We employ large-scale simulations over realistic scenarios to show that S4R operates at a close-to-optimal state and encourages broad participation from stubs and ISPs.

2010
  • http://arxiv.org/abs/1002.3893

    Authors: Shuchi Chawla, David Malec, Balasubramanian Sivan

    We investigate the power of randomness in the context of a fundamental Bayesian optimal mechanism design problem--a single seller aims to maximize expected revenue by allocating multiple kinds of resources to "unit-demand" agents with preferences drawn from a known distribution. When the agents' preferences are single-dimensional, Myerson's seminal work [Myerson '81] shows that randomness offers no benefit--the optimal mechanism is always deterministic. In the multi-dimensional case, where each agent's preferences are given by different values for each of the available services, Briest et al. [Briest, Chawla, Kleinberg, and Weinberg '10] recently showed that the gap between the expected revenue obtained by an optimal randomized mechanism and an optimal deterministic mechanism can be unbounded even when a single agent is offered only 4 services. However, this large gap is attained through unnatural instances in which the agent's values for different services are correlated in a specific way. We show that when the agent's values involve no correlation or a specific kind of positive correlation, the benefit of randomness is only a small constant factor (4 and 8, respectively). Our model of positively correlated values (which we call additive values) is a natural model for unit-demand agents and items that are substitutes. Our results extend to multiple-agent settings as well.

  • http://arxiv.org/abs/0907.2435

    Authors: Shuchi Chawla, Jason Hartline, David Malec, Balasubramanian Sivan

    We consider the classical mathematical economics problem of Bayesian optimal mechanism design, where a principal aims to optimize expected revenue when allocating resources to self-interested agents with preferences drawn from a known distribution. In single-parameter settings (i.e., where each agent's preference is given by a single private value for being served and zero for not being served), this problem is solved [Myerson '81]. Unfortunately, these single-parameter optimal mechanisms are impractical and rarely employed [Ausubel and Milgrom '06], and furthermore the underlying economic theory fails to generalize to the important, relevant, and unsolved multi-dimensional setting (i.e., where each agent's preference is given by multiple values for each of the multiple services available) [Manelli and Vincent '07]. In contrast to the theory of optimal mechanisms, we develop a theory of sequential posted price mechanisms, where agents in sequence are offered take-it-or-leave-it prices. These mechanisms are approximately optimal in single-dimensional settings, and avoid many of the properties that make optimal mechanisms impractical. Furthermore, these mechanisms generalize naturally to give the first known approximations to the elusive optimal multi-dimensional mechanism design problem. In particular, we solve multi-dimensional multi-unit auction problems and generalizations to matroid feasibility constraints. The constant approximations we obtain range from 1.5 to 8. For all but one case, our posted price sequences can be computed in polynomial time.
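
    The mechanism itself is easy to state. Here is a toy Python sketch for the special case of k identical units; the prices, which the paper derives from the value distributions, are taken as given here, and all names are illustrative.

        def sequential_posted_price(values, prices, k):
            """Offer each agent, in order, a take-it-or-leave-it price;
            an agent buys iff her value is at least her price and a unit
            remains. Truthfulness is immediate: each agent faces a fixed
            price and simply accepts or declines.

            values: agents' private values, in arrival order.
            prices: the posted price offered to each agent.
            k: number of identical units (the paper handles general
               matroid feasibility constraints).
            """
            winners, revenue = [], 0.0
            for i, (v, p) in enumerate(zip(values, prices)):
                if k == 0:
                    break  # feasibility exhausted
                if v >= p:
                    winners.append(i)
                    revenue += p
                    k -= 1
            return winners, revenue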

  • http://arxiv.org/abs/0904.2400

    Authors: Patrick Briest, Shuchi Chawla, Robert Kleinberg, S. Matthew Weinberg

    Randomized mechanisms, which map a set of bids to a probability distribution over outcomes rather than a single outcome, are an important but ill-understood area of computational mechanism design. We investigate the role of randomized outcomes (henceforth, "lotteries") in the context of a fundamental and archetypical multi-parameter mechanism design problem: selling heterogeneous items to unit-demand bidders. To what extent can a seller improve her revenue by pricing lotteries rather than items, and does this modification of the problem affect its computational tractability? Our results show that the answers to these questions hinge on whether consumers can purchase only one lottery (the buy-one model) or purchase any set of lotteries and receive an independent sample from each (the buy-many model). In the buy-one model, there is a polynomial-time algorithm to compute the revenue-maximizing envy-free prices (thus overcoming the inapproximability of the corresponding item pricing problem) and the revenue of the optimal lottery system can exceed the revenue of the optimal item pricing by an unbounded factor as long as the number of item types exceeds 4. In the buy-many model with n item types, the profit achieved by lottery pricing can exceed item pricing by a factor of O(log n) but not more, and optimal lottery pricing cannot be approximated within a factor of n^eps for some eps > 0, unless NP has subexponential-time randomized algorithms. Our lower bounds rely on a mixture of geometric and algebraic techniques, whereas the upper bounds use a novel rounding scheme to transform a mechanism with randomized outcomes into one with deterministic outcomes while losing only a bounded amount of revenue.

  • http://arxiv.org/abs/0908.0350

    Authors: Siddharth Barman, Shuchi Chawla

    We study a number of multi-route cut problems: given a graph G = (V, E) and connectivity thresholds k_(u,v) on pairs of nodes, the goal is to find a minimum-cost set of edges or vertices whose removal reduces the connectivity between every pair (u,v) to strictly below its given threshold. These problems arise in the context of reliability in communication networks; they are natural generalizations of traditional minimum cut problems, in which the thresholds are either 1 (we want to completely separate the pair) or infinity (we do not care about the connectivity of the pair). We provide the first non-trivial approximations to a number of variants of the problem, covering both node-disjoint and edge-disjoint connectivity thresholds. A main contribution of our work is an extension of the region growing technique for approximating minimum multicuts to the multi-route setting. When the connectivity thresholds are either 2 or infinity (the "2-route cut" case), we obtain polylogarithmic approximations while satisfying the thresholds exactly. For arbitrary connectivity thresholds, this approach leads to bicriteria approximations in which we approximately satisfy the thresholds and approximately minimize the cost. We present a number of different algorithms achieving different cost-connectivity tradeoffs.

  • http://arxiv.org/abs/1002.5034

    Authors: Eric Bach, Shuchi Chawla, Seeun Umboh

    We consider the following sample selection problem. We observe in an online fashion a sequence of samples, each endowed with a quality. Our goal is to select or reject each sample as it arrives, so as to maximize the aggregate quality of the subsample we select. There is a natural trade-off here between the rate of selection and the aggregate quality of the subsample. We show that for a number of such problems, extremely simple and oblivious "threshold rules" for selection achieve optimal tradeoffs between the rate of selection and the aggregate quality in a probabilistic sense. In some cases we show that the same threshold rule is optimal for a large class of quality distributions and is thus oblivious in a strong sense.
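
    A minimal simulation (assuming uniform [0,1] qualities; all names illustrative) shows the trade-off a fixed threshold rule navigates: raising the threshold lowers the selection rate but raises the average quality of the selected subsample.

        import random

        def threshold_select(stream, t):
            """Oblivious threshold rule: keep a sample iff its quality exceeds t."""
            return [q for q in stream if q > t]

        random.seed(0)
        stream = [random.random() for _ in range(10000)]
        for t in (0.0, 0.5, 0.9):
            picked = threshold_select(stream, t)
            print(f"t={t:.1f}  rate={len(picked)/len(stream):.2f}  "
                  f"avg quality={sum(picked)/len(picked):.2f}")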

Pre-2010
  • http://www.cs.wisc.edu/~shuchi/papers/Bertrand-EC09.pdf

    Authors: Shuchi Chawla, Feng Niu

    The Internet is composed of multiple economically-independent service providers that sell bandwidth in their networks so as to maximize their own revenue. Users, on the other hand, route their traffic selfishly to maximize their own utility. How does this selfishness impact the efficiency of operation of the network? To answer this question we consider a two-stage network pricing game where service providers first select prices to charge on their links, and users pick paths to route their traffic. We give tight bounds on the price of anarchy of the game with respect to social value--the total value obtained by all the traffic routed. Unlike recent work on network pricing, in our pricing game users do not face congestion costs; instead service providers must ensure that capacity constraints on their links are satisfied. Our model extends the classic Bertrand game in economics to network settings.
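
    Under the usual convention for maximization objectives, the price of anarchy compares the best achievable social value to the social value at the worst equilibrium:

        \mathrm{PoA} \;=\; \frac{\max_{s} \mathrm{SV}(s)}{\min_{s \in \mathrm{Eq}} \mathrm{SV}(s)},

    where SV(s) is the total value obtained by the traffic routed in outcome s and Eq is the set of equilibria of the two-stage game.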

  • http://arxiv.org/abs/0810.0674

    Authors: Siddharth Barman, Shuchi Chawla

    We consider the following “multiway cut packing” problem in undirected graphs: given a graph G = (V, E) and k commodities, each corresponding to a set of terminals located at different vertices in the graph, our goal is to produce a collection of cuts {E_1, ..., E_k} such that E_i is a multiway cut for commodity i and the maximum load on any edge is minimized. The load on an edge is defined to be the number of cuts in the solution containing the edge. In the capacitated version of the problem the goal is to minimize the maximum relative load on any edge—the ratio of the edge's load to its capacity. Multiway cut packing arises in the context of graph labeling problems where we are given a partial labeling of a set of items and a neighborhood structure over them, and the goal, informally stated, is to complete the labeling in the most consistent way. This problem was introduced by Rabani, Schulman, and Swamy (SODA'08), who developed an O(log n / log log n) approximation for it in general graphs, as well as an improved O(log^2 k) approximation in trees. Here n is the number of nodes in the graph. We present the first constant factor approximation for this problem in arbitrary undirected graphs. Our LP-rounding-based algorithm guarantees a maximum edge load of at most 8 OPT + 4 in general graphs. Our approach is based on the observation that every instance of the problem admits a laminar solution (that is, one in which no pair of cuts crosses) that is near-optimal.
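
    The objective is easy to evaluate for a candidate solution. Here is a small sketch (hypothetical names; edges assumed hashable) of the capacitated objective:

        from collections import Counter

        def max_relative_load(cuts, capacity):
            """cuts: list of edge sets, one multiway cut per commodity.
            capacity: edge -> positive capacity.
            An edge's load is the number of cuts containing it; the
            capacitated problem minimizes the maximum load/capacity ratio.
            """
            load = Counter()
            for cut in cuts:
                load.update(cut)
            return max(load[e] / capacity[e] for e in load)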

  • http://www.cs.wisc.edu/~shuchi/papers/Bertrand-competition.pdf

    Authors: Shuchi Chawla, Tim Roughgarden

    We study price-of-anarchy type questions in two-sided markets with combinatorial consumers and limited supply sellers. Sellers own edges in a network and sell bandwidth at fixed prices subject to capacity constraints; consumers buy bandwidth between their sources and sinks so as to maximize their value from sending traffic minus the prices they pay to edges. We characterize the price of anarchy and price of stability in these “network pricing” games with respect to two objectives—the social value (social welfare) of the consumers, and the total profit obtained by all the sellers. In single-source single-sink networks we give tight bounds on these quantities based on the degree of competition, specifically the number of monopolistic edges, in the network. In multiple-source single-sink networks, we show that equilibria perform well only under additional assumptions on the network and demand structure.

Graduate Courses
  • Advanced Algorithms. This is a first graduate course in algorithm design meant to fulfill a theory breadth requirement. Some lecture notes from 2009.
  • Algorithmic Mechanism Design. Latest version: Fall 2020.
  • Approximation and Online Algorithms. Latest version: Fall 2019.
  • Algorithmic Game Theory. Latest version: Spring 2011.
  • Beyond Worst-Case Analysis in Algorithm Design. Latest version: Spring 2015.
  • Algorithms for Massive Datasets. Latest version: Fall 2017.
  • Randomized Algorithms. Latest version: Fall 2004.
Lecture notes
    TBA.
Other service commitments (out of date)
  • SIGACT exec committee member-at-large, 2018-2021.
  • Member and current chair of CATCS.