What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?
Sham Kakade (University of Washington & Microsoft Research)

The area of offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of (causal) sequential decision making strategies. The hope is that offline reinforcement learning coupled with function approximation methods (to deal with the curse of dimensionality) can provide a means to help alleviate the excessive sample complexity burden in modern sequential decision making problems. As such, the approach is becoming increasingly important to numerous areas of science, engineering, and technology. However, the extent to which this broader approach can be effective is not well understood, where the literature largely consists of sufficient conditions. This work focuses on the basic question of what representational and distributional conditions are necessary to permit provably sample-efficient offline reinforcement learning. Perhaps surprisingly, our main result shows that even if: 1) we have realizability, in that the true value function of _every_ policy is linear in a given set of features, and 2) our off-policy data has good coverage over all features (under a strong spectral condition), then any algorithm still (information-theoretically) requires a number of offline samples that is exponential in the problem horizon in order to non-trivially estimate the value of _any_ given policy. Our results highlight that sample-efficient, offline policy evaluation is simply not possible unless significantly stronger conditions hold; such conditions include either having low distribution shift (where the offline data distribution is close to the distribution of the policy to be evaluated) or significantly stronger representational conditions (beyond realizability). This is joint work with Ruosong Wang and Dean Foster.
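One concrete way to write the two conditions in the lower bound (my paraphrase of the abstract, not the paper's exact statement) is in terms of a known feature map φ and the offline data distribution μ:

```latex
% Realizability: the value function of every policy is linear in a known feature map phi
\forall \pi,\ \exists\, \theta_\pi \in \mathbb{R}^d:\qquad
  Q^{\pi}(s,a) \;=\; \langle \theta_\pi, \phi(s,a) \rangle
  \quad \text{for all } (s,a).

% Coverage: the offline distribution mu excites every feature direction (a spectral condition)
\lambda_{\min}\Big(\mathbb{E}_{(s,a)\sim \mu}\big[\phi(s,a)\,\phi(s,a)^{\top}\big]\Big) \;\ge\; \kappa \;>\; 0.
```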
An Alternative Softmax Operator for Reinforcement Learning
Michael Littman (Brown University)

A softmax operator applied to a set of values acts somewhat like the maximization function and somewhat like an average. In sequential decision making, softmax is often used in settings where it is necessary to maximize utility but also to hedge against problems that arise from putting all of one's weight behind a single maximum-utility decision. The Boltzmann softmax operator is the most commonly used softmax operator in this setting, but we show that this operator is prone to misbehavior. In this work, we study a differentiable softmax operator that, among other properties, is a non-expansion, ensuring convergent behavior in learning and planning. We introduce a variant of the SARSA algorithm that, by utilizing the new operator, computes a Boltzmann policy with a state-dependent temperature parameter. We show that the algorithm is convergent and that it performs favorably in practice. (With Kavosh Asadi.)
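The operator studied here is, to the best of my knowledge, the mellowmax operator of Asadi and Littman. A minimal sketch contrasting it with the Boltzmann softmax (illustrative code, not the paper's):

```python
import numpy as np
from scipy.special import logsumexp, softmax

def boltzmann(values, beta):
    """Boltzmann softmax: a weighted average with weights exp(beta * x).
    Not a non-expansion in general, which is the source of the misbehavior
    discussed in the talk."""
    values = np.asarray(values, dtype=float)
    return float(np.dot(softmax(beta * values), values))

def mellowmax(values, omega):
    """Mellowmax: log-mean-exp with temperature omega. Recovers the mean as
    omega -> 0 and the max as omega -> infinity, and is a non-expansion."""
    values = np.asarray(values, dtype=float)
    return float((logsumexp(omega * values) - np.log(len(values))) / omega)

q = [0.2, 0.5, 0.45]            # arbitrary example action values
print(boltzmann(q, beta=16.7), mellowmax(q, omega=16.7))
```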
On the Global Convergence and Approximation Benefits of Policy Gradient Methods
Daniel Russo (Columbia University)

Policy gradient methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, due to the multi-period nature of the objective, policy gradient algorithms face non-convex optimization problems and can get stuck in suboptimal local minima even for extremely simple problems. This talk will discuss structural properties, shared by several canonical control problems, that guarantee the policy gradient objective function has no suboptimal stationary points despite being non-convex. Time permitting, I'll then zoom in on the special case of state-aggregated policies and a proof showing that policy gradient converges to better policies than its relative, approximate policy iteration.
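As a generic illustration of the algorithm class the talk analyzes (not the talk's own method), here is a minimal REINFORCE-style update for a tabular softmax policy; the `env.reset` / `env.step` interface is hypothetical:

```python
import numpy as np

def softmax_policy(theta, s):
    """Action distribution of a tabular softmax policy; theta has shape
    (n_states, n_actions)."""
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def reinforce_episode(env, theta, gamma=0.99, lr=0.1):
    """One REINFORCE update: run an episode, then ascend the sampled
    policy-gradient estimate sum_t G_t * grad log pi(a_t | s_t)."""
    states, actions, rewards = [], [], []
    s, done = env.reset(), False
    while not done:
        p = softmax_policy(theta, s)
        a = np.random.choice(len(p), p=p)
        states.append(s); actions.append(a)
        s, r, done = env.step(a)              # hypothetical (state, reward, done) interface
        rewards.append(r)
    G = 0.0
    for t in reversed(range(len(rewards))):   # returns-to-go
        G = rewards[t] + gamma * G
        p = softmax_policy(theta, states[t])
        grad_log_pi = -p
        grad_log_pi[actions[t]] += 1.0        # gradient of log softmax at the taken action
        theta[states[t]] += lr * G * grad_log_pi
    return theta
```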
Corruption Robust Exploration in Episodic Reinforcement Learning
Aleksandrs Slivkins (Microsoft Research NYC)

We initiate the study of episodic RL under adversarial corruptions in both the rewards and the transition probabilities of the underlying system. Our solution adapts to the (unknown) level of corruption, degrading gracefully with the total corruption encountered. In particular, we attain near-optimal regret for a constant level of corruption. We derive results for "tabular" MDPs as well as MDPs that admit a linear representation. Notably, we provide the first sublinear regret guarantee that goes beyond i.i.d. transitions in the bandit-feedback model for episodic RL. We build on a new framework which combines the paradigms of "optimism under uncertainty" and "successive elimination". Neither paradigm alone suffices: "optimism under uncertainty", common in current work on stochastic RL, cannot handle corruptions, while "successive elimination" works for bandits with corruptions but is provably inefficient even for stochastic RL. Joint work with Thodoris Lykouris, Max Simchowitz, and Wen Sun. https://arxiv.org/abs/1911.08689
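To give a feel for the "successive elimination" ingredient in its simplest stochastic-bandit form (not the paper's episodic-RL construction), the sketch below pulls the surviving arms round-robin and drops an arm when another beats it by more than the sum of confidence radii; widening each radius by a per-pull allowance for a known corruption budget is one standard way to robustify elimination, used here purely for illustration:

```python
import math
import random

def successive_elimination(means, horizon, corruption_budget=0.0):
    """Basic successive elimination on a stochastic Bernoulli bandit, with
    confidence radii optionally widened by a known corruption budget
    (illustrative sketch; the talk's algorithm handles full episodic RL)."""
    k = len(means)
    active = list(range(k))
    pulls, totals = [0] * k, [0.0] * k

    def radius(a):
        # Hoeffding radius plus a per-pull allowance for adversarial corruption.
        return math.sqrt(2 * math.log(max(horizon, 2)) / pulls[a]) + corruption_budget / pulls[a]

    t = 0
    while t < horizon and len(active) > 1:
        for a in list(active):                # round-robin over surviving arms
            if t >= horizon:
                break
            totals[a] += 1.0 if random.random() < means[a] else 0.0
            pulls[a] += 1
            t += 1
        if min(pulls[a] for a in active) == 0:
            continue                          # not every active arm sampled yet
        best = max(active, key=lambda a: totals[a] / pulls[a])
        active = [a for a in active
                  if totals[best] / pulls[best] - totals[a] / pulls[a]
                     <= radius(best) + radius(a)]
    return active
```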
Representation Learning and Exploration in Reinforcement Learning
Akshay Krishnamurthy (Microsoft Research)

I will discuss new provably efficient algorithms for reinforcement learning in rich observation environments with arbitrarily large state spaces. Both algorithms operate by learning succinct representations of the environment, which they use in an exploration module to acquire new information. The first algorithm, called Homer, operates in a block MDP model and uses a contrastive learning objective to learn the representation. The second algorithm, called FLAMBE, operates in a much richer class of low-rank MDPs and is model based. Both algorithms accommodate nonlinear function approximation and enjoy provable sample and computational efficiency guarantees.
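For context, the low-rank MDP model that FLAMBE operates in (as defined in the FLAMBE paper; the block MDP is the special case with a decodable latent state) assumes the transition kernel factorizes through d-dimensional embeddings:

```latex
% Low-rank MDP: transitions factor through unknown feature maps phi* and mu*
T(s' \mid s, a) \;=\; \big\langle \phi^{\star}(s,a),\, \mu^{\star}(s') \big\rangle,
\qquad \phi^{\star}(s,a),\ \mu^{\star}(s') \in \mathbb{R}^{d}.
```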
Multiplayer Bandit Learning - From Competition to Cooperation
Simina Branzei (Purdue University)

The stochastic multi-armed bandit model captures the tradeoff between exploration and exploitation. We study the effects of competition and cooperation on this tradeoff. Suppose there are k arms and two players, Alice and Bob. In every round, each player pulls an arm, receives the resulting reward, and observes the choice of the other player but not their reward. Alice's utility is Γ_A + λΓ_B (and similarly for Bob), where Γ_A is Alice's total reward and λ ∈ [−1, 1] is a cooperation parameter. At λ = −1 the players are competing in a zero-sum game, at λ = 1 they are fully cooperating, and at λ = 0 they are neutral: each player's utility is their own reward. The model is related to the economics literature on strategic experimentation, where usually players observe each other's rewards. With discount factor β, the Gittins index reduces the one-player problem to the comparison between a risky arm, with a prior μ, and a predictable arm, with success probability p. The value of p at which the player is indifferent between the arms is the Gittins index g = g(μ, β) > m, where m is the mean of the risky arm. We show that competing players explore less than a single player: there is p* ∈ (m, g) so that for all p > p*, the players stay at the predictable arm. However, the players are not myopic: they still explore for some p > m. On the other hand, cooperating players explore more than a single player. We also show that neutral players learn from each other, receiving strictly higher total rewards than they would playing alone, for all p ∈ (p*, g), where p* is the threshold from the competing case. Finally, we show that competing and neutral players eventually settle on the same arm in every Nash equilibrium, while this can fail for cooperating players. This is based on joint work with Yuval Peres.
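The single-player benchmark in the abstract, the indifference point g(μ, β) between the risky arm and the predictable arm, can be approximated numerically. The sketch below does so for a Bernoulli risky arm with a Beta prior by binary search over p; the Beta-prior choice and the finite-horizon truncation of the discounted problem are my assumptions, not the talk's:

```python
from functools import lru_cache

def gittins_index(a0, b0, beta, horizon=150, tol=1e-4):
    """Approximate g(mu, beta) for a Bernoulli risky arm with a Beta(a0, b0)
    prior: the success probability p of the predictable arm at which a single
    player is indifferent between the two arms (finite-horizon truncation of
    the discounted problem; illustrative only)."""

    def safe_value(p, t):
        # Discounted value of pulling the predictable arm for the remaining t rounds.
        return p * (1 - beta ** t) / (1 - beta)

    def risky_start_value(p):
        @lru_cache(maxsize=None)
        def V(a, b, t):
            if t == 0:
                return 0.0
            m = a / (a + b)
            # Pull the risky arm once (Bayes-updating the Beta posterior) ...
            cont = m * (1 + beta * V(a + 1, b, t - 1)) + (1 - m) * beta * V(a, b + 1, t - 1)
            # ... or retire to the predictable arm for the rest of the horizon.
            return max(cont, safe_value(p, t))
        return V(a0, b0, horizon)

    lo, hi = a0 / (a0 + b0), 1.0          # the index lies between the prior mean m and 1
    while hi - lo > tol:
        p = (lo + hi) / 2
        if risky_start_value(p) > safe_value(p, horizon) + 1e-9:
            lo = p                         # experimenting still pays: index lies above p
        else:
            hi = p
    return (lo + hi) / 2

print(gittins_index(1, 1, beta=0.9))       # index for a uniform prior (mean 0.5)
```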
Multi-Player Multi-Armed Bandit: Can We Still Collaborate at Homes Without "Zoom"?
Yuanzhi Li (Carnegie Mellon University)

The multi-armed bandit is a well-established model in online decision making, where one player makes sequential decisions in a non-stationary environment to maximize their cumulative reward. The problem becomes significantly more challenging when there are multiple players in the same environment and only one piece of reward is available per arm at a time: if two players pick the same arm in the same round, they can only collect one piece of reward instead of two. To maximize reward, players need to collaborate to avoid "collisions" -- that is, to make sure they do not all rush to the same arm (even if it has the highest reward). We consider the even more challenging setting where communication between players is completely disabled: e.g., they are in different parts of the world without any "Zoom". We show that nearly optimal regret can still be obtained in this setting: players can collaborate in a non-stationary environment without any communication (and, of course, without quantum entanglement either :))
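As a toy illustration of the reward-sharing rule described above, one round of the environment might look like the following; the assumption that the single realized reward goes to one arbitrary colliding player is mine, not the talk's:

```python
import random

def play_round(arm_choices, means):
    """One round of a multi-player bandit in which each arm yields at most one
    piece of reward per round: if several players pull the same arm, only one
    of them receives the realized reward (the tie-breaking rule is an
    illustrative assumption)."""
    rewards = [0.0] * len(arm_choices)
    for arm in set(arm_choices):
        pullers = [i for i, choice in enumerate(arm_choices) if choice == arm]
        realized = 1.0 if random.random() < means[arm] else 0.0
        rewards[random.choice(pullers)] = realized   # collision: one reward in total
    return rewards

# Two players rushing to the best arm share (at most) one piece of reward.
print(play_round([0, 0], means=[0.9, 0.6]))
```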
Country-Scale Bandit Implementation for Targeted COVID-19 Testing
Hamsa Bastani (Wharton School of the University of Pennsylvania)

In collaboration with the Greek government, we use machine learning to manage the threat of COVID-19. With tens of thousands of international visitors every day, Greece cannot test each visitor to ensure that they are not a carrier of COVID-19. We developed a bandit policy that balances allocating scarce tests to (i) continuously monitor the dynamic infection risk of passengers from different locations (exploration), and (ii) preferentially target risky tourist profiles for testing (exploitation). Our solution is currently deployed across all ports of entry to Greece. I will describe a number of technical challenges, including severely imbalanced outcomes, batched/delayed feedback, high-dimensional arms, port-specific testing constraints, and transferring knowledge from (unreliable) public epidemiological data. Joint work with Kimon Drakopoulos and Vishal Gupta.
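This is not the deployed system, but a minimal Thompson-sampling sketch of the core exploration/exploitation trade-off: sample a prevalence estimate per passenger type from a Beta posterior, spend the day's test budget on the types that currently look riskiest, then update the posteriors when results arrive. Batching, delays, port-specific constraints, and epidemiological priors are all ignored here:

```python
import numpy as np

def allocate_tests(alpha, beta, budget, rng=None):
    """Split a scarce test budget across passenger types by Thompson sampling
    on Beta(alpha, beta) prevalence posteriors (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(len(alpha), dtype=int)
    for _ in range(budget):
        draw = rng.beta(alpha, beta)          # one posterior sample per type
        counts[np.argmax(draw)] += 1          # test the riskiest-looking type
    return counts

def update_posteriors(alpha, beta, tested, positives):
    """Conjugate Beta-Bernoulli update once the day's test results arrive."""
    return alpha + positives, beta + (tested - positives)

alpha, beta = np.ones(3), np.ones(3)          # three passenger types, flat priors
print(allocate_tests(alpha, beta, budget=100))
```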
Online Learning via Offline Greedy Algorithms: Applications in Market Design and Optimization
Negin Golrezaei (MIT)

Motivated by online decision-making in time-varying combinatorial environments, we study the problem of transforming offline algorithms into their online counterparts. We focus on offline combinatorial problems that are amenable to a constant-factor approximation using a greedy algorithm that is robust to local errors. For such problems, we provide a general framework that efficiently transforms offline robust greedy algorithms into online ones using Blackwell approachability. We show that the resulting online algorithms have O(T^{1/2}) (approximate) regret in the full-information setting. We further introduce a bandit extension of Blackwell approachability that we call Bandit Blackwell approachability. We leverage this notion to transform greedy robust offline algorithms into online algorithms with O(T^{2/3}) (approximate) regret in the bandit setting. Demonstrating the flexibility of our framework, we apply our offline-to-online transformation to several problems at the intersection of revenue management, market design, and online optimization, including product ranking optimization in online platforms, reserve price optimization in auctions, and submodular maximization. We show that our transformation, when applied to these applications, leads to new regret bounds or improves the currently known bounds.
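One of the offline building blocks mentioned above, the greedy algorithm for monotone submodular maximization under a cardinality constraint (a 1 - 1/e approximation), is easy to sketch; the Blackwell-approachability wrapper that turns such a subroutine into an online algorithm is not shown here:

```python
def greedy_submodular(ground_set, f, k):
    """Offline greedy for monotone submodular f under a cardinality constraint:
    repeatedly add the element with the largest marginal gain. This is the kind
    of robust offline subroutine the talk's offline-to-online transformation
    takes as input (illustrative sketch)."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set:
            if e in chosen:
                continue
            gain = f(chosen | {e}) - f(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:                      # no element improves the value
            break
        chosen.add(best)
    return chosen

# Example: a small coverage function (coverage is monotone submodular).
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
print(greedy_submodular(sets.keys(), coverage, k=2))
```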
Beating the Curse of Dimensionality in High-Dimensional Optimal Stopping
David Goldberg (Cornell ORIE)

The sample and computational complexity of reinforcement learning algorithms under the generative model has been a central focus of the literature. Most existing approaches that can guarantee epsilon-optimality require a complexity either dependent on the size of the state space or scaling exponentially in the time horizon T, in both cases suffering from the curse of dimensionality. Other approaches may be more efficient, but typically either make strong structural assumptions or do not come with optimality guarantees. We reconcile this dilemma for the celebrated problem of high-dimensional optimal stopping, a special case of RL with important applications to financial engineering, operations research, economics, and computer science, and well known to be computationally challenging when the state space and time horizon are large. We propose the first algorithm with sample and computational complexity polynomial in T, and effectively independent of the underlying state space, which outputs epsilon-optimal stopping policies and value function approximations. Our algorithm is inspired by a novel connection between network flow theory in combinatorial optimization and the classical martingale duality theory for stopping problems, and can be viewed as a higher-order generalization of the celebrated prophet inequalities. Our approach yields an expansion for the value functions as an infinite sum, which is fundamentally different from standard expansions based on backwards induction, with the following properties: 1) each term is the expectation of an elegant, recursively defined infimum; 2) the first k terms only require T^k samples to approximate; and 3) truncating at the first k terms yields a (normalized) error of 1/k. The terms of the associated sum can also be viewed as a provably good set of basis functions, which can be efficiently evaluated by simulation, in contrast with past approximate dynamic programming approaches that take a heuristic approach to deriving basis functions. Our work shows that although the state space is exponentially large, one need only explore it to the extent that one can compute these (few) basis functions, which can be done efficiently. Time permitting, we will also explore applications of our approach to other high-dimensional RL problems (including those arising in dynamic pricing and online combinatorial optimization), and discuss connections to parallel and quantum computing. Joint work with Ph.D. student Yilun Chen.
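The classical martingale duality the abstract alludes to (the Rogers / Haugh-Kogan representation, a building block rather than the talk's expansion itself) writes the optimal stopping value as an infimum over mean-zero martingales:

```latex
% Martingale duality for discrete-time optimal stopping with adapted reward process Z_t:
\sup_{\tau \le T} \mathbb{E}[Z_\tau]
  \;=\;
\inf_{M \in \mathcal{M}_0}\; \mathbb{E}\Big[\max_{0 \le t \le T} \big( Z_t - M_t \big)\Big],
\qquad \mathcal{M}_0 = \{\text{martingales with } M_0 = 0\}.
```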
Learning Outcomes in Queueing Systems
Eva Tardos (Cornell)

Over the last two decades we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions), and have shown that the resulting bounds extend to repeated games, assuming players use a form of no-regret learning that helps them adapt to the environment. In this talk we study this phenomenon in the context of a game modeling queueing systems: routers compete for servers, and packets that do not get service are resent in future rounds, resulting in a system where the number of packets at each round depends on the success of the routers in previous rounds. In joint work with Jason Gaitonde, we analyze the resulting highly dependent random process and find that if the capacity of the servers is high enough to allow a centralized and knowledgeable scheduler to get all packets served even with double the packet arrival rate, then learning helps the queues coordinate their behavior: the expected number of packets in the queues remains bounded throughout time, assuming older packets have priority.
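The no-regret learning assumed of each queue can be instantiated with any standard bandit learner; as a generic stand-in (not the paper's exact dynamics), here is an EXP3-style rule a queue could use to pick a server each round from feedback on whether its packet was served:

```python
import math
import random

class Exp3Queue:
    """A queue that picks a server each round with an EXP3-style no-regret
    learner (generic stand-in for the learning dynamics in the talk; the
    surrounding queueing and service model is not implemented here)."""

    def __init__(self, n_servers, gamma=0.1):
        self.n = n_servers
        self.gamma = gamma
        self.weights = [1.0] * n_servers

    def choose(self):
        total = sum(self.weights)
        # Mix the weight-proportional distribution with uniform exploration.
        self.probs = [(1 - self.gamma) * w / total + self.gamma / self.n
                      for w in self.weights]
        r, cum = random.random(), 0.0
        for j, p in enumerate(self.probs):
            cum += p
            if r <= cum:
                return j
        return self.n - 1

    def update(self, server, served):
        # Importance-weighted estimate of the 0/1 "my packet was served" reward.
        estimate = (1.0 if served else 0.0) / self.probs[server]
        self.weights[server] *= math.exp(self.gamma * estimate / self.n)
```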