[Colloquium] Optimization in the Dark
- Date: Thursday, March 6th, 2008
- Time: 11:00 am – 12:15 pm
- Place: ME 218
Thomas Hayes
Research Assistant Professor, TTI Chicago
Consider the following model for sequential decision making in the presence of uncertainty. In each of T rounds, a decision maker must choose an action, which yields some amount of reward (or loss). The process is online: the rewards are initially unknown, but after each round the decision maker observes the reward of the chosen action as feedback, which can inform later decisions. A key feature of this problem is the need to carefully balance exploitation (taking the action that currently looks best) against exploration (trying an action just to see what happens).
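To make the protocol concrete, here is a minimal sketch of the T-round loop with a simple epsilon-greedy strategy standing in for the explore/exploit decision; the `pull` reward oracle, the epsilon-greedy rule, and all names here are illustrative assumptions, not material from the talk.

```python
import random

def epsilon_greedy_bandit(T, K, pull, epsilon=0.1):
    """Play K actions over T rounds under bandit feedback.

    `pull(a)` is a hypothetical oracle returning the (initially unknown)
    reward of action a; only the chosen action's reward is revealed.
    """
    totals = [0.0] * K   # cumulative observed reward per action
    counts = [0] * K     # number of times each action was tried
    rewards = []
    for t in range(T):
        if 0 in counts or random.random() < epsilon:
            a = random.randrange(K)  # explore: try an action to see what happens
        else:
            # exploit: take what currently seems like the best action
            a = max(range(K), key=lambda i: totals[i] / counts[i])
        r = pull(a)                  # feedback arrives only for the chosen action
        totals[a] += r
        counts[a] += 1
        rewards.append(r)
    return sum(rewards)
```

For example, with `pull = lambda a: random.gauss([0.1, 0.5, 0.9][a], 1.0)` and K = 3, the loop gradually concentrates its plays on the third action while still sampling the others occasionally.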
We will focus on the case where the rewards may be viewed as (unknown) linear functions over the action space. This framework includes the classical “K-armed bandit” problem (what is the best way to play K slot machines, or “one-armed bandits”?), but is considerably more general. In particular, the action space may be far larger than the dimension of the linear space in which it is embedded (for the K-armed bandit, the cardinality of the action space equals the dimension). Traditional algorithms for online linear optimization, based on a reduction to the K-armed bandit, suffer performance penalties proportional to the number of actions; in many practical settings, that number is exponentially large.
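To see how the action space can dwarf the dimension, consider actions that are subsets of d base items (say, edges of a graph), each represented by its indicator vector, with reward linear in that vector. The construction below (the subset framing, dimensions, and coefficients are assumptions of this sketch) embeds 184,756 actions in a 20-dimensional linear space, so a reduction to the K-armed bandit would pay a penalty proportional to 184,756 rather than 20.

```python
from itertools import combinations

d = 20                                    # dimension of the linear space
theta = [0.05 * i for i in range(d)]      # unknown linear coefficients (illustrative)

# Each action: the indicator vector of a size-k subset of the d coordinates.
k = 10
actions = [tuple(1 if i in combo else 0 for i in range(d))
           for combo in combinations(range(d), k)]

def reward(x):
    # The reward is a linear function of the action's vector representation.
    return sum(t * xi for t, xi in zip(theta, x))

print(len(actions))   # 184756 = C(20, 10) actions, embedded in only d = 20 dimensions
```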
I will present some new algorithms, based on techniques from statistical regression, that achieve provably near-optimal reward guarantees.
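The abstract does not spell out the algorithms, but the general regression-plus-exploration idea can be sketched as follows: maintain a ridge-regression estimate of the unknown coefficient vector from the (action, reward) pairs observed so far, and mostly play the action whose estimated linear reward is largest. Everything below (the epsilon-greedy exploration, the `pull(x)` oracle, the finite action matrix) is an assumption of this illustration, not the speaker's method.

```python
import numpy as np

def regression_linear_bandit(T, actions, pull, lam=1.0, epsilon=0.1, seed=0):
    """Generic regression-based linear bandit sketch (not the talk's algorithm).

    `actions` is an (n, d) array of action vectors; `pull(x)` is a
    hypothetical oracle returning the observed reward of vector x.
    """
    rng = np.random.default_rng(seed)
    A = np.asarray(actions, dtype=float)
    n, d = A.shape
    G = lam * np.eye(d)          # regularized Gram matrix: sum of x x^T + lam * I
    b = np.zeros(d)              # accumulated sum of r * x
    total = 0.0
    for t in range(T):
        theta_hat = np.linalg.solve(G, b)      # ridge-regression estimate
        if rng.random() < epsilon:
            i = int(rng.integers(n))           # explore: random action
        else:
            i = int(np.argmax(A @ theta_hat))  # exploit: best estimated reward
        x = A[i]
        r = pull(x)                            # bandit feedback for x alone
        G += np.outer(x, x)
        b += r * x
        total += r
    return total
```

The point of the regression step is that the estimate of the coefficient vector improves at a rate governed by the dimension d rather than by the number of actions n, which is the kind of exponential saving over the K-armed-bandit reduction that the abstract describes.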