Beyond Task Diversity: Provable Representation Transfer for Sequential Multi-Task Linear Bandits

We study lifelong learning in linear bandits, where a learner interacts with a sequence of linear bandit tasks whose parameters lie in an m-dimensional subspace of \(\mathbb{R}^d\), thereby sharing a low-rank representation. Current literature typically assumes that the tasks are diverse, i.e., their parameters uniformly span the m-dimensional subspace. This assumption allows the low-rank representation to be learned before all tasks are revealed, which can be unrealistic in real-world applications. In this work, we present the first nontrivial result for sequential multi-task linear bandits without the task diversity assumption. We develop an algorithm that efficiently learns and transfers low-rank representations. When facing N tasks, each played over \(\tau\) rounds, our algorithm achieves a regret guarantee of \(\tilde{O}\big(Nm\sqrt{\tau} + N^{\frac{2}{3}} \tau^{\frac{2}{3}} d m^{\frac{1}{3}} + Nd^2 + \tau m d\big)\) under the ellipsoid action set assumption.
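To make the setting concrete: each task parameter can be written as \(\theta_i = B w_i\) for a fixed orthonormal \(B \in \mathbb{R}^{d \times m}\), so a learner that knows (or has estimated) \(B\) faces an m-dimensional rather than d-dimensional problem per task. The minimal sketch below illustrates this gap; the epsilon-greedy ridge learner, arm sets, and constants are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N, tau = 20, 3, 30, 300   # ambient dim, subspace dim, #tasks, rounds/task

# Shared structure: every task parameter lies in a fixed m-dimensional
# subspace of R^d spanned by the columns of the orthonormal matrix B.
B, _ = np.linalg.qr(rng.standard_normal((d, m)))
thetas = (B @ rng.standard_normal((m, N))).T     # N task parameters in R^d

def play_task(theta, proj=None, n_arms=50, noise=0.1, eps=0.1):
    """Epsilon-greedy ridge-regression learner for one task; returns its regret.

    With `proj` (a known or estimated basis of the shared subspace) the learner
    regresses in m dimensions instead of d, which is the gain from transfer.
    """
    arms = rng.standard_normal((n_arms, d))
    arms /= np.linalg.norm(arms, axis=1, keepdims=True)
    X = arms if proj is None else arms @ proj    # features the learner sees
    p = X.shape[1]
    A, b, regret = np.eye(p), np.zeros(p), 0.0
    values = arms @ theta                        # true expected rewards
    for _ in range(tau):
        if rng.random() < eps:
            k = rng.integers(n_arms)                       # explore
        else:
            k = int(np.argmax(X @ np.linalg.solve(A, b)))  # exploit estimate
        r = arms[k] @ theta + noise * rng.standard_normal()
        A += np.outer(X[k], X[k]); b += r * X[k]
        regret += values.max() - values[k]
    return regret

# Knowing the representation reduces the effective dimension from d to m.
print("regret, no transfer  :", round(sum(play_task(t) for t in thetas), 1))
print("regret, with transfer:", round(sum(play_task(t, proj=B) for t in thetas), 1))
```

In the sketch, transfer shrinks the regression dimension from d to m, mirroring how the leading term of the bound scales as \(Nm\sqrt{\tau}\) rather than \(Nd\sqrt{\tau}\).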


Non-stationary Bandits and Meta-Learning with a Small Set of Optimal Arms

We study a meta-learning problem where the learner faces a sequence of N multi-armed bandit tasks. Each task is a K-armed bandit problem of horizon T that may be designed by an adversary, but the adversary is constrained to choose the optimal arm of each task from a smaller (but unknown) subset of M arms. An effective strategy is expected to learn this common structure and exploit it when solving future tasks. We present an algorithm whose worst-case regret is bounded as \(\widetilde O(N\sqrt{MT} + T\sqrt{KMN})\). We also show that, at the cost of an extra \(O(\log N)\) term, the problem can be solved in a computationally efficient way.
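The structural assumption is that, while each task may be adversarial, its optimal arm always falls in a hidden subset of \(M \ll K\) arms, so the optima of past tasks are informative about future ones. Below is a minimal sketch of this setting with a naive candidate-set heuristic; the UCB learner, Bernoulli rewards, and restriction rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, N, T = 50, 3, 40, 1000                      # arms, hidden subset, tasks, horizon
star_set = rng.choice(K, size=M, replace=False)   # adversary's hidden subset of optima

def ucb(means, horizon, candidate_arms):
    """Plain UCB1 restricted to `candidate_arms`; returns (regret, empirical best arm)."""
    n, s, regret = np.zeros(K), np.zeros(K), 0.0
    for t in range(horizon):
        unplayed = [a for a in candidate_arms if n[a] == 0]
        if unplayed:
            a = unplayed[0]                       # pull every candidate once first
        else:
            scores = s[candidate_arms] / n[candidate_arms] \
                     + np.sqrt(2 * np.log(t + 1) / n[candidate_arms])
            a = candidate_arms[int(np.argmax(scores))]
        r = rng.binomial(1, means[a])
        n[a] += 1; s[a] += r
        regret += means.max() - means[a]
    return regret, int(np.argmax(np.where(n > 0, s / np.maximum(n, 1), -1.0)))

total, candidates = 0.0, set()                    # arms found optimal in past tasks
for task in range(N):
    means = rng.uniform(0.1, 0.5, size=K)
    means[rng.choice(star_set)] = 0.9             # optimum constrained to the hidden subset
    # Exploit the learned structure: once M candidates are known, search only them.
    arms = sorted(candidates) if len(candidates) >= M else list(range(K))
    regret, best = ucb(means, T, arms)
    candidates.add(best)
    total += regret
print("total regret over all tasks:", round(total, 1))
```

Once enough past optima have been collected, each new task is played over roughly M rather than K arms, which is the intuition behind the \(N\sqrt{MT}\) term in the bound; the naive restriction here can of course fail when a new optimum has not yet been observed, which is what the paper's algorithm handles with guarantees.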
