Browsing by Author "Conitzer, Vincent"
Item Open Access Adapting a Kidney Exchange Algorithm to Incorporate Human Values (2017-05-04) Freedman, Rachel

Artificial morality is moral behavior exhibited by automated or artificially intelligent agents. A primary goal of the field of artificial morality is to design artificial agents that act in accordance with human values. One domain in which computer systems make such value decisions is that of kidney exchange. In a kidney exchange, incompatible patient-donor pairs exchange kidneys with other incompatible pairs instead of waiting for cadaver kidney transplants. This allows these patients to be removed from the waiting list and to receive live transplants, which typically have better outcomes. When the matching of these pairs is automated, algorithms must decide which patients to prioritize. In this paper, I develop a procedure to align these prioritization decisions with human values. Many previous attempts to impose human values on artificial agents have relied on the "top-down approach" of defining a coherent framework of rules for the agent to follow. Instead, I develop my value function by gathering survey participants' responses to relevant moral dilemmas, using machine learning to approximate the value system that these responses are based on, and then encoding that value system into the algorithm. This "bottom-up approach" is thought to produce more accurate, robust, and generalizable moral systems. My method of gathering, analyzing, and incorporating public opinion can be easily generalized to other domains. Its success here therefore suggests that it holds promise for the future development of artificial morality.
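To make the kind of prioritized matching described above concrete, here is a minimal, hypothetical sketch (not the author's code): it brute-forces the set of disjoint two-way exchanges maximizing a learned priority score. The weight function is a stand-in for the survey-derived value function; the pairs, features, and compatibility data are invented for illustration.

    # Hypothetical sketch: choosing disjoint 2-cycles in a small kidney-exchange
    # compatibility graph so as to maximize a learned priority score.
    from itertools import combinations

    def weight(features):
        # Stub for the value function learned from survey data; here a fixed
        # linear scoring rule, purely for illustration.
        return sum(features)

    def best_two_cycle_matching(pairs, compatible):
        """pairs: dict id -> feature vector;
        compatible: set of ordered (i, j) meaning donor i can give to patient j."""
        # Candidate 2-cycles: pairs that can donate to each other.
        cycles = [(i, j) for i, j in combinations(pairs, 2)
                  if (i, j) in compatible and (j, i) in compatible]
        best_score, best_set = 0.0, []
        # Enumerate all subsets of candidate cycles (fine for tiny instances).
        for k in range(len(cycles) + 1):
            for subset in combinations(cycles, k):
                used = [v for c in subset for v in c]
                if len(used) != len(set(used)):  # cycles must be vertex-disjoint
                    continue
                score = sum(weight(pairs[i]) + weight(pairs[j]) for i, j in subset)
                if score > best_score:
                    best_score, best_set = score, list(subset)
        return best_set, best_score

    pairs = {0: (1.0, 0.2), 1: (0.5, 0.9), 2: (0.8, 0.1)}
    compatible = {(0, 1), (1, 0), (1, 2), (2, 1)}
    print(best_two_cycle_matching(pairs, compatible))  # ([(0, 1)], 2.6)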
Item Open Access Approximately Optimal Mechanisms With Correlated Buyer Valuations (2013) Albert, Michael Joseph

Cremer and McLean (1985) show that if buyers' valuations are sufficiently correlated, there is a mechanism that allows the seller to extract the full surplus from the buyers. However, in practice, we do not see the Cremer-McLean mechanism employed. In this thesis, I demonstrate that one reason the Cremer-McLean mechanism is not implemented in practice is that the mechanism requires very precise assumptions about the underlying distributions of the buyers' valuations. I demonstrate that a small mis-estimation of the underlying distribution can have large and significant effects on the outcome of the mechanism. I further prove that the Cremer-McLean mechanism cannot be approximated by a simple second-price auction; that is, there is no approximation factor, in either outcome or expectation, for a second-price auction with reserve relative to the Cremer-McLean mechanism. Further, I show that there is no mechanism that approximates the Cremer-McLean mechanism for bidders with regular distributions in a single-item auction if the correlation among buyers is not considered. Finally, I introduce a new mechanism that is robust to distribution mis-estimation, show empirically that it outperforms the Cremer-McLean mechanism on average in cases of distribution mis-estimation, and show that the mechanism can be computed in time polynomial in the number of buyer types.
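For reference, the simple baseline the abstract compares against, a second-price auction with reserve, can be sketched in a few lines (an illustrative toy with made-up bids, not code from the thesis):

    def second_price_with_reserve(bids, reserve):
        """Return (winner_index, price), or (None, 0.0) if no bid meets the reserve."""
        eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
        if not eligible:
            return None, 0.0
        eligible.sort(reverse=True)
        winner = eligible[0][1]
        # Winner pays the larger of the reserve and the second-highest eligible bid.
        price = max(reserve, eligible[1][0]) if len(eligible) > 1 else reserve
        return winner, price

    # Example: bids of 5, 8, 3 with reserve 4 -> bidder 1 wins and pays 5.
    print(second_price_with_reserve([5.0, 8.0, 3.0], reserve=4.0))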
Item Open Access Computational Aspects of Stackelberg Games (2013) Letchford, Joshua

Game theory involves the study of how self-interested agents interact in various settings. Perhaps the best-known game-theoretic solution concept is that of Nash equilibrium. However, Nash equilibrium makes a number of assumptions, such as the rationality of all of the players and the ability of the players to choose the same equilibrium when more than one exists. Because of these assumptions, it is unclear whether simply solving for Nash equilibrium is always the correct thing to do. In some settings, one player can instead credibly commit to a strategy, and communicate this to the other player, before the other player makes a decision. This has become known as a Stackelberg game. Computing optimal strategies to commit to in normal-form or Bayesian Stackelberg games is a topic that has recently been gaining attention, in part due to the application of such algorithms in various security and law enforcement scenarios. My work on Stackelberg games falls into three main areas.
First, I focus on general games, where we give efficient algorithms and hardness results for Bayesian, extensive-form, and stochastic games. In each of these settings we study the relationship between different modeling assumptions and the tractability of finding an optimal strategy to commit to. For Bayesian games our results are mainly negative; we show that not only are the problems here NP-hard, but in many cases they are also inapproximable. Our results for extensive-form games are more mixed; we are able to give polynomial-time algorithms for four cases. However, we also show that if we relax the assumptions made in these four cases, then the problem usually becomes NP-hard. Finally, our results for stochastic games are again somewhat negative, as we show that the problem is NP-hard in most reasonable cases. However, here we are also able to give an approximation algorithm for computing optimal commitment strategies in a setting where correlation is allowed.
I next focus on Stackelberg security games. Stackelberg security games usually involve the scheduling of scarce defense resources to cover some subset of potential targets. We first study methods for going from marginal solutions (which ignore overlapping coverage between different schedules) to valid mixed commitment strategies in graphical settings. Here we are able to characterize two new classes of games where mixed strategies corresponding to the marginal probabilities are guaranteed to exist, and we give algorithms for finding them. Next, we offer a simple model of interdependencies between nodes in a network based on probabilistic failure cascades, extending the well-known independent cascade model of the spread of infectious diseases or ideas. We give an algorithm for this problem and experimentally show that it scales to realistic security settings and outperforms state-of-the-art alternatives. Finally, we create an approach for optimal interdiction of attack plans. We show how to model an attack-plan interdiction problem as a large-scale integer linear program, similar to an integer programming approach for solving partial satisfaction planning problems. We also give several oracle-based approaches for solving this program and then evaluate them experimentally.
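The independent cascade model that this interdependency model extends can be illustrated with a short Monte Carlo simulation (a generic textbook sketch, not the dissertation's algorithm; the graph and failure probability are made up):

    import random

    def simulate_cascade(edges, seeds, p, rng):
        """edges: dict node -> list of neighbors; seeds: initially failed nodes;
        p: probability that a failed node causes each neighbor to fail."""
        failed = set(seeds)
        frontier = list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in edges.get(node, []):
                if nbr not in failed and rng.random() < p:
                    failed.add(nbr)
                    frontier.append(nbr)
        return failed

    # Estimate the expected number of failures when node 0 is attacked.
    edges = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
    runs = [len(simulate_cascade(edges, {0}, 0.5, random.Random(s))) for s in range(1000)]
    print(sum(runs) / len(runs))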
Third, I analyze how much benefit a player can derive from commitment in various types of games, in a quantitative sense that is similar to known concepts such as the value of mediation and the price of anarchy. To do this we introduce and study the value of pure commitment (the benefit of committing to a pure strategy), the value of mixed commitment (the benefit of committing to a mixed strategy), and the mixed vs. pure commitment ratio (how much can be gained by committing to a mixed strategy rather than a pure one).
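A tiny worked example of these quantities (a hypothetical 2x2 game, not one from the thesis): brute-forcing the leader's payoff under pure versus mixed commitment, with the follower best-responding and breaking ties in the leader's favor, shows that mixed commitment can strictly help.

    # Leader payoffs A[i][j], follower payoffs B[i][j]; made-up numbers.
    A = [[2.0, 4.0], [1.0, 3.0]]
    B = [[1.0, 0.0], [0.0, 1.0]]

    def follower_best_response(p):
        # p = probability the leader plays row 0; the follower picks the column
        # maximizing expected payoff, breaking ties in the leader's favor.
        u = [p * B[0][j] + (1 - p) * B[1][j] for j in (0, 1)]
        best = max(u)
        ties = [j for j in (0, 1) if abs(u[j] - best) < 1e-9]
        return max(ties, key=lambda j: p * A[0][j] + (1 - p) * A[1][j])

    def leader_value(p):
        j = follower_best_response(p)
        return p * A[0][j] + (1 - p) * A[1][j]

    value_pure = max(leader_value(0.0), leader_value(1.0))
    value_mixed = max(leader_value(k / 1000) for k in range(1001))
    print(value_pure, value_mixed)  # 3.0 vs 3.5: mixed commitment strictly helps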
Item Open Access Computational Voting Theory: Game-Theoretic and Combinatorial Aspects (2011) Xia, Lirong

For at least two thousand years, voting has been used as one of the most effective ways to aggregate people's ordinal preferences. In the last 50 years, the rapid development of Computer Science has revolutionized every aspect of the world, including voting. This motivates us to study (1) conceptually, how computational thinking changes traditional voting theory, and (2) methodologically, how to better use voting for preference/information aggregation with the help of Computer Science.

My Ph.D. work seeks to investigate and foster the interplay between Computer Science and Voting Theory. In this thesis, I will discuss two specific research directions pursued in my Ph.D. work, one for each question asked above. The first focuses on investigating how computational thinking affects the game-theoretic aspects of voting. More precisely, I will discuss the rationale for, and the possibility of, using computational complexity to protect voting from a type of strategic behavior of the voters called manipulation. The second studies a voting setting called combinatorial voting, where the set of alternatives is exponentially large and has a combinatorial structure. I will focus on the design and analysis of novel voting rules for combinatorial voting that balance computational efficiency and the expressivity of the voting language, in light of some recent developments in Artificial Intelligence.
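As a small illustration of the manipulation problem mentioned above, the following toy sketch (an invented three-voter profile, not from the thesis) brute-forces whether a single voter can beneficially misreport under the Borda rule; complexity barriers aim to make exactly this kind of search impractical at scale.

    from itertools import permutations

    def borda_winner(profile, alternatives):
        m = len(alternatives)
        scores = {a: 0 for a in alternatives}
        for ranking in profile:
            for pos, a in enumerate(ranking):
                scores[a] += m - 1 - pos
        # Break ties lexicographically (max returns the first maximal item).
        return max(sorted(alternatives), key=lambda a: scores[a])

    def can_manipulate(profile, voter, alternatives):
        truthful = borda_winner(profile, alternatives)
        true_ranking = profile[voter]
        for lie in permutations(alternatives):
            trial = list(profile)
            trial[voter] = lie
            w = borda_winner(trial, alternatives)
            # Manipulation succeeds if the voter prefers the new winner.
            if true_ranking.index(w) < true_ranking.index(truthful):
                return True
        return False

    profile = [("a", "b", "c"), ("b", "a", "c"), ("b", "a", "c")]
    print(can_manipulate(profile, voter=0, alternatives=["a", "b", "c"]))  # True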
Item Open Access Computationally Feasible Approaches to Automated Mechanism Design (2010) Guo, Mingyu

In many multiagent settings, a decision must be made based on the preferences of multiple agents, and agents may lie about their preferences if this is to their benefit. In mechanism design, the goal is to design procedures (mechanisms) for making the decision that work in spite of such strategic behavior, usually by making untruthful behavior suboptimal. In automated mechanism design, the idea is to computationally search through the space of feasible mechanisms, rather than to design them analytically by hand. Unfortunately, the most straightforward approach to automated mechanism design does not scale to large instances, because it requires searching over a very large space of possible functions. In this thesis, we adopt an approach to automated mechanism design that is computationally feasible. Instead of optimizing over all feasible mechanisms, we carefully choose a parameterized subfamily of mechanisms. Then we optimize over mechanisms within this family. Finally, we analyze whether and to what extent the resulting mechanism is suboptimal outside the subfamily. We apply (computationally feasible) automated mechanism design to three resource allocation mechanism design problems: mechanisms that redistribute revenue, mechanisms that involve no payments at all, and mechanisms that guard against false-name manipulation.
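A toy instance of the parameterized-subfamily idea (hypothetical, for illustration only): search a one-parameter family of mechanisms, second-price auctions indexed by their reserve price, for the member with the highest average revenue on sampled valuations.

    import random

    def revenue(bids, r):
        eligible = sorted((b for b in bids if b >= r), reverse=True)
        if not eligible:
            return 0.0
        return max(r, eligible[1]) if len(eligible) > 1 else r

    rng = random.Random(0)
    samples = [[rng.random(), rng.random()] for _ in range(10000)]  # two U[0,1] bidders
    # Grid search over the family parameter (the reserve).
    best_r = max((k / 100 for k in range(101)),
                 key=lambda r: sum(revenue(s, r) for s in samples))
    print(best_r)  # should land near the theoretical optimum of 0.5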
Item Open Access Dynamic Mechanism Design in Complex Environments (2020) Deng, Yuan

Inspired by various applications including ad auctions, matching markets, and voting, mechanism design deals with the problem of designing algorithms that take inputs from strategic agents and return an outcome optimizing a given objective, while taking the agents' strategic behavior into account.

The focus of this thesis is to design mechanisms in dynamic environments that take into account rich constraints (e.g., budget constraints), features (e.g., robustness and credibility), and different types of agents (e.g., utility-maximizing agents and learning agents). Dynamic mechanism design is hard compared to mechanism design in a static environment for two main reasons: decisions must be made in an online manner while the future may be unpredictable, or even chosen arbitrarily by an adversary; and the designer must cope with strategic agents, who aim to maximize their cumulative utilities by looking into the future.

We propose a framework for designing dynamic mechanisms with simple structures for utility-maximizing agents without losing any optimality, which facilitates both the design task for the designer and participation for the agents. Our framework enables the design of mechanisms achieving non-trivial performance guarantees relative to the optimal mechanism that has access to all future information in advance, even though our mechanisms are not equipped with any knowledge about the future. We further develop a class of dynamic mechanisms that are robust against estimation errors in agents' valuation distributions, a class of dynamic mechanisms that are credible, so that the designer is incentivized to follow the rules, and a class of dynamic mechanisms for learning agents. In addition to these dynamic mechanism design frameworks, we develop statistical tools to test whether a dynamic mechanism has correctly aligned the agents' incentives, and to measure the extent of the misalignment if it exists.
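As a loose illustration of what an incentive-alignment test might look like (a made-up sketch, not the thesis's statistical tools): estimate from simulated rounds how much a bidder could gain by uniformly shading bids relative to bidding truthfully; a gain near zero is consistent with aligned incentives.

    import random

    def run(mechanism, rounds, shade):
        utility = 0.0
        for value, competitor in rounds:
            win, price = mechanism(shade * value, competitor)
            utility += (value - price) if win else 0.0
        return utility

    def second_price(bid, other):
        # Truthful benchmark mechanism: win against the other bid, pay it.
        return (True, other) if bid > other else (False, 0.0)

    rng = random.Random(1)
    rounds = [(rng.random(), rng.random()) for _ in range(20000)]
    truthful = run(second_price, rounds, shade=1.0)
    best_dev = max(run(second_price, rounds, shade=s / 10) for s in range(1, 11))
    print(max(0.0, best_dev - truthful))  # estimated gain from deviating; ~0 here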
Item Open Access Eliciting and Aggregating Information for Better Decision Making (2018) Freeman, Rupert

In this thesis, we consider two classes of problems where algorithms are increasingly used to make, or assist in making, a wide range of decisions. The first class of problems we consider is the allocation of jointly owned resources among a group of agents, and the second is the elicitation and aggregation of probabilistic forecasts from agents regarding future events. Solutions to these problems must trade off between many competing objectives, including economic efficiency, fairness between participants, and strategic concerns.
In the first part of the thesis, we consider shared resource allocation, where we relax two common assumptions in the fair division literature. First, we relax the assumption that goods are private, meaning that they must be allocated to only a single agent, and introduce a more general public decision making model. This allows us to incorporate ideas and techniques from fair division to define novel fairness notions in the public decisions setting. Second, we relax the assumption that decisions are made offline, and instead consider online decisions. In this setting, we are forced to make decisions based on limited information, while seeking to retain fairness and game-theoretic desiderata.
In the second part of the thesis, we consider the design of mechanisms for forecasting. We first consider a tradeoff between several desirable properties for wagering mechanisms, showing that the properties of Pareto efficiency, incentive compatibility, budget balance, and individual rationality are incompatible with one another. We propose two compromise solutions by relaxing either Pareto efficiency or incentive compatibility. Next, we consider the design of decentralized prediction markets, which are defined by the lack of any single trusted authority. As a consequence, markets must be closed by popular vote amongst a group of anonymous, untrusted arbiters. We design a mechanism that incentivizes arbiters to truthfully report their information even when they have a (possibly conflicting) stake in the market themselves.
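A standard building block behind incentive-compatible forecast elicitation of this kind is a proper scoring rule. Below is a generic illustration with the quadratic (Brier) rule (textbook material, not a mechanism from the thesis): reporting one's true belief maximizes expected score.

    def brier_score(report, outcome):
        # report: probability assigned to the event; outcome: 1 if it happened.
        return 1.0 - (outcome - report) ** 2

    def expected_score(report, belief):
        return belief * brier_score(report, 1) + (1 - belief) * brier_score(report, 0)

    belief = 0.7
    best = max((r / 100 for r in range(101)), key=lambda r: expected_score(r, belief))
    print(best)  # 0.7: truthful reporting is optimal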
Item Open Access Essays on Identification and Promotion of Game-Theoretic Cooperation (2018) Moon, Catherine

This dissertation looks at how to identify and promote cooperation in a multiagent system, first theoretically, through the lens of computational game theory, and later empirically, through a human subject experiment. Chapter 2 studies the network dynamics leading to a potential unraveling of cooperation and identifies the subset of agents with which an enforceable cooperative agreement can be formed. This is an important problem because cooperation is harder to sustain when information about defection, and thus the consequent punishment, transfers slowly through the network structure of a larger community. Chapter 3 examines a model that studies cooperation in a broader strategic context, where agents may interact in multiple different domains, or games, simultaneously. Even if a game independently does not give an agent sufficient incentive to play the cooperative action, there may be hope for cooperation when multiple games with compensating asymmetries are put together. Exploiting compensating asymmetries, we can find an institutional arrangement that either ensures maximum incentives for cooperation or requires minimum subsidy to establish sufficient incentives for cooperation (a toy sketch of this bundling idea appears below). Lastly, Chapter 4 studies a two-layered public good game to empirically examine whether community enforcement through existing bilateral relationships can encourage cooperation in a social dilemma situation. Here, it is found that how the situation is presented matters greatly to real-life agents, as their understanding of whether they are in a cooperative or a competitive strategic setting changes the level of overall cooperation.
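The bundling idea from Chapter 3 can be caricatured in a few lines (hypothetical numbers and a simple grim-trigger sustainability test; not the dissertation's model):

    # Cooperation is sustainable under grim trigger if the one-shot gain from
    # deviating is at most (delta / (1 - delta)) times the per-period loss from
    # losing cooperation, summed across the bundled games.
    def sustainable(games, delta):
        gain = sum(g["deviation_gain"] for g in games)
        loss = sum(g["punishment_loss"] for g in games)
        return gain <= delta / (1 - delta) * loss

    g1 = {"deviation_gain": 5.0, "punishment_loss": 1.0}   # too tempting alone
    g2 = {"deviation_gain": 0.5, "punishment_loss": 4.0}   # lots of slack alone
    delta = 0.6
    print(sustainable([g1], delta), sustainable([g2], delta), sustainable([g1, g2], delta))
    # False True True: together, the slack in g2 compensates for g1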
Item Open Access Essays on Privacy, Information, and Anonymous Transactions (2009) Wagman, Liad

This dissertation uses game-theoretic models to examine the effects of agent anonymity on markets for goods and for information. In open, anonymous settings, such as the Internet, anonymity is relatively easy to obtain; oftentimes another email address is sufficient. By becoming anonymous, agents can participate in various mechanisms (such as elections, opinion polls, auctions, etc.) multiple times. The first chapter (joint work with Vincent Conitzer) studies elections that disincentivize voters from voting multiple times. A voting rule is false-name-proof if no agent ever benefits from casting additional votes. In elections with two alternatives, it is shown that there is a unique false-name-proof voting rule that is most responsive to votes. The probability that this rule selects the majority winner converges to 1 as the population grows large. Methods to design analogous rules for elections with 3 or more alternatives are proposed. The second chapter (also joint work with Vincent Conitzer) extends the analysis in the first chapter to broader mechanism design settings, where the goal is to disincentivize agents from participating multiple times. The cost model from the first chapter is generalized and revelation principles are proven. The third chapter studies a setting where firms are able to recognize their previous customers and may use information about consumers' purchase histories to price discriminate (which may incentivize consumers to be anonymous). The formal model considers a monopolist and a continuum of heterogeneous consumers, where consumers are able to maintain their anonymity at some cost. It is shown that when consumers can costlessly maintain their anonymity, they all individually choose to do so, which paradoxically results in the highest profit for the monopolist. Increasing the cost of anonymity can benefit consumers, but only up to a point; beyond that point, the effect is reversed. Some of the results are extended to a setting with two competing firms selling differentiated products. Finally, the cost of maintaining anonymity is endogenized by considering a third party that can make consumers anonymous for a fee of its choosing. It is shown that this third party would prefer to be paid by the firm for allowing consumers to costlessly maintain their anonymity.
Item Open Access Foreword (In Good And Generous Faith, 2006) Cox, J; Schwarcz, S

Item Open Access Game-Theoretically Allocating Resources to Catch Evaders and Payments to Stabilize Teams (2016) Li, Yuqian

Allocating resources optimally is a nontrivial task, especially when multiple self-interested agents with conflicting goals are involved. This dissertation uses techniques from game theory to study two classes of such problems: allocating resources to catch agents that attempt to evade them, and allocating payments to agents in a team in order to stabilize it. Besides discussing which allocations are optimal from various game-theoretic perspectives, we also study how to compute them efficiently, and, where no such algorithms are found, what computational hardness results can be proved.

The first class of problems is inspired by real-world applications such as the TOEFL iBT test, course final exams, driver's license tests, and airport security patrols. We call them test games and security games. This dissertation first studies test games separately, and then proposes a framework of Catcher-Evader games (CE games) that generalizes both test games and security games. We show that the optimal test strategy can be efficiently computed for scored test games, but it is hard to compute for many binary test games. Optimal Stackelberg strategies are hard to compute for CE games, but we give an empirically efficient algorithm for computing their Nash equilibria. We also prove that the Nash equilibria of a CE game are interchangeable.

The second class of problems involves how to split a reward that is collectively obtained by a team: for example, how should a startup distribute its shares, and what salary should an enterprise pay to each of its employees? Several stability-based solution concepts in cooperative game theory, such as the core, the least core, and the nucleolus, are well suited to this purpose when the goal is to prevent coalitions of agents from breaking off. We show that some of these solution concepts can be justified as the most stable payments under noise. Moreover, by adjusting the noise models (to be arguably more realistic), we obtain new solution concepts, including the partial nucleolus, the multiplicative least core, and the multiplicative nucleolus. We then study the computational complexity of these solution concepts under the constraint of superadditivity. Our result is based on what we call Small-Issues-Large-Team games, and it applies to popular representation schemes such as MC-nets.
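For the cooperative solution concepts named above, here is a generic sketch (a made-up three-player game; the standard textbook formulation, not the dissertation's algorithms) that computes a least-core allocation as a linear program with scipy:

    from itertools import combinations
    from scipy.optimize import linprog

    players = (0, 1, 2)
    v = {(0,): 0.0, (1,): 0.0, (2,): 0.0,
         (0, 1): 4.0, (0, 2): 4.0, (1, 2): 4.0, (0, 1, 2): 5.0}

    # Variables: x0, x1, x2, eps. Objective: minimize eps.
    c = [0.0, 0.0, 0.0, 1.0]
    A_ub, b_ub = [], []
    for size in (1, 2):
        for S in combinations(players, size):
            # v(S) - x(S) <= eps  <=>  -x(S) - eps <= -v(S)
            row = [-1.0 if i in S else 0.0 for i in players] + [-1.0]
            A_ub.append(row)
            b_ub.append(-v[S])
    A_eq = [[1.0, 1.0, 1.0, 0.0]]   # efficiency: allocate exactly v(N)
    b_eq = [v[(0, 1, 2)]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * 4)
    print(res.x)  # ~ (5/3, 5/3, 5/3) with eps = 2/3: the core is empty here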
Item Open Access Non-parametric approximate linear programming for MDPs (2012) Pazis, Jason

One of the most difficult tasks in value-function-based methods for learning in Markov decision processes is finding an approximation architecture that is expressive enough to capture the important structure in the value function, while at the same time not overfitting the training samples. This thesis presents a novel non-parametric approach to approximate linear programming (NP-ALP), which requires nothing more than a smoothness assumption on the value function. NP-ALP can make use of real-world, noisy sampled transitions rather than requiring samples from the full Bellman equation, while providing the first known max-norm, finite-sample performance guarantees for ALP under mild assumptions. Additionally, NP-ALP is amenable to problems with large (multidimensional) or even infinite (continuous) action spaces, and does not require a model to select actions using the resulting approximate solution.
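For background, the exact linear-programming formulation of an MDP, which approximate-linear-programming methods then restrict to an architecture, can be written down directly (a standard textbook LP on an invented three-state chain, not the thesis's NP-ALP):

    from scipy.optimize import linprog

    gamma = 0.9
    n_states, n_actions = 3, 2
    # R[s][a] and P[s][a][s'] for a made-up 3-state, 2-action chain.
    R = [[0.0, 1.0], [0.0, 1.0], [10.0, 1.0]]
    P = [[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
         [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]],
         [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]]

    # minimize sum_s V(s)  subject to  V(s) >= R[s][a] + gamma * sum_s' P * V(s')
    c = [1.0] * n_states
    A_ub, b_ub = [], []
    for s in range(n_states):
        for a in range(n_actions):
            # gamma * P V - V <= -R, one row per state-action pair
            row = [gamma * P[s][a][t] - (1.0 if t == s else 0.0) for t in range(n_states)]
            A_ub.append(row)
            b_ub.append(-R[s][a])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n_states)
    print(res.x)  # the optimal value function, here ~ [81, 90, 100]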
Item Open Access Security Games: Solution Concepts and Algorithms (2013) Korzhyk, Dmytro

Algorithms for finding game-theoretic solutions are now used in several real-world security applications. Many of these applications are based on different but related game-theoretic models collectively known as security games. Much of the research in this area has focused on the two-player setting, in which the first player (the leader, or defender) commits to a strategy, after which the second player (the follower, or attacker) observes that strategy and responds to it. This is commonly known as the Stackelberg, or leader-follower, model. If neither player can observe the actions of the other, the setting is called a simultaneous-move game. A common solution concept in simultaneous-move games is the Nash equilibrium (NE). In this dissertation, we contribute to this line of research in two ways.

First, we consider new ways of modeling commitment. We propose a new model in which the leader can commit to a correlated strategy. We show that this model is equivalent to the Stackelberg model in two-player games and differs from the existing models in games with three or more players. We propose an algorithm for computing a solution to this model in polynomial time. We also consider a leader-follower setting in which the players are uncertain about whether the follower can observe, and we describe an iterative algorithm for solving such games.

Second, we analyze the computational complexity of computing Stackelberg and NE strategies in security games. We describe algorithms that solve some variants of the security game model in polynomial time and prove NP-hardness of solving other variants. We also extend the family of security games by allowing the attacker to have multiple resources. We provide an algorithm for computing an NE of such games in polynomial time, and we show that computing a Stackelberg strategy is NP-hard.
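A toy security game illustrates the commitment computations involved (hypothetical payoffs; a grid search rather than any algorithm from the dissertation): a defender with one resource splits coverage probability over two targets, and the attacker then attacks the target that is most attractive given that coverage.

    # attacker gets u_att[t] if target t is attacked while unprotected;
    # defender loses u_def[t] in that event, and loses nothing otherwise.
    u_att = [5.0, 2.0]
    u_def = [10.0, 1.0]

    def defender_loss(c0):
        cov = [c0, 1.0 - c0]                      # one resource split across targets
        att_payoff = [(1 - cov[t]) * u_att[t] for t in (0, 1)]
        t = max((0, 1), key=lambda i: att_payoff[i])   # attacker best-responds
        return (1 - cov[t]) * u_def[t]

    best = min((k / 1000 for k in range(1001)), key=defender_loss)
    print(best, defender_loss(best))  # coverage ~5/7 makes the attacker indifferent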
Item Open Access The Revelation Principle for Mechanism Design with Signaling Costs (2017) Kephart, Andrew

The revelation principle is a key tool in mechanism design. It allows the designer to restrict attention to the class of truthful mechanisms, greatly facilitating analysis. This is also borne out in an algorithmic sense, allowing certain computational problems in mechanism design to be solved in polynomial time. Unfortunately, when not every type can misreport every other type (the partial verification model), or, more generally, when misreporting can be costly, the revelation principle can fail to hold. This also leads to NP-hardness results.

The primary contribution of this work consists of characterizations of conditions under which the revelation principle still holds when reporting can be costly. (These generalize conditions given earlier for the partial verification case.) In fact, our results extend to cases where, instead of being able to report types directly, agents may be limited to sending signals that do not directly correspond to types. In this case, we obtain conditions under which the mechanism designer can restrict attention to a given (but arbitrary) mapping from types to signals without loss of generality. We also study the associated computational problems.
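The costly-misreporting setting can be illustrated with a tiny direct-revelation check (invented utilities and signaling costs, purely for illustration): a mechanism is truthful if no type gains by sending another type's report once the signaling cost is subtracted.

    types = ["low", "high"]
    # Utility of the mechanism's outcome, indexed by (true type, reported type).
    outcome_utility = {("low", "low"): 1.0, ("low", "high"): 2.0,
                       ("high", "low"): 0.0, ("high", "high"): 3.0}
    # Cost of sending each report, indexed the same way; lying is costly.
    report_cost = {("low", "low"): 0.0, ("low", "high"): 1.5,
                   ("high", "low"): 0.5, ("high", "high"): 0.0}

    def truthful(utility, cost):
        for t in types:
            for r in types:
                if utility[(t, r)] - cost[(t, r)] > utility[(t, t)] - cost[(t, t)]:
                    return False
        return True

    print(truthful(outcome_utility, report_cost))  # True: the costs deter lying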
Item Open Access Who Benefits from Online Privacy? (2008) Conitzer, Vincent; Taylor, Curtis R; Wagman, Liad

When firms can identify their past customers, they may use information about purchase histories in order to price discriminate. We present a model with a monopolist and a continuum of heterogeneous consumers, where consumers can opt out from being identified, possibly at a cost. We find that when consumers can costlessly opt out, they all individually choose privacy, which results in the highest profit for the monopolist. In fact, all consumers are better off when opting out is costly. When valuations are uniformly distributed, social surplus is non-monotonic in the cost of opting out and is highest when opting out is prohibitively costly. We introduce the notion of a privacy gatekeeper, a third party that is able to act as a privacy conduit and set the cost of opting out. We prove that the privacy gatekeeper only charges the firm in equilibrium, making privacy costless to consumers.