A stochastic multi-objective multi-armed bandit problem is a particular type of multi-objective (MO) optimization problem in which the goal is to find the optimal arms and play them fairly. To solve this MO optimization problem, we propose an annealing linear scalarized algorithm that transforms the MO optimization problem into a single-objective one by using a linear scalarization function, and that finds and plays the optimal arms fairly by using a decaying parameter ε_t. We empirically compare the linear scalarized-UCB1 algorithm with the annealing linear scalarized algorithm on a test suite of multi-objective multi-armed bandit problems with independent Bernoulli distributions, using different approaches to define the weight sets: the standard approach, the adaptive approach, and the genetic approach. We conclude that the performance of the annealing scalarized and the scalarized UCB1 algorithms depends on the weight approach used.
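To make the annealing linear scalarized idea concrete, the sketch below shows a minimal Python implementation for a bandit with independent Bernoulli arms and a fixed weight vector. It is an illustrative sketch under simple assumptions (an exponential decay schedule for ε_t and empirical-mean estimates), not the paper's exact algorithm or weight-set procedure; all names such as `annealing_scalarized_bandit` and `decay` are chosen here for illustration.

```python
import numpy as np

def annealing_scalarized_bandit(true_means, weights, horizon=10000,
                                eps0=1.0, decay=0.999, rng=None):
    """Illustrative annealing linear scalarized bandit sketch.

    true_means : (K, D) Bernoulli success probabilities per arm and objective.
    weights    : (D,) weight vector of the linear scalarization function.
    """
    rng = np.random.default_rng() if rng is None else rng
    K, D = true_means.shape
    counts = np.zeros(K)            # number of times each arm was played
    mean_est = np.zeros((K, D))     # empirical mean reward per arm and objective
    eps = eps0

    for t in range(horizon):
        # Linear scalarization: collapse the D objectives into one value per arm.
        scalarized = mean_est @ weights
        if rng.random() < eps:      # explore: play a uniformly random arm
            arm = int(rng.integers(K))
        else:                       # exploit: play the best scalarized arm
            arm = int(np.argmax(scalarized))

        # Draw an independent Bernoulli reward for each objective.
        reward = rng.binomial(1, true_means[arm]).astype(float)
        counts[arm] += 1
        mean_est[arm] += (reward - mean_est[arm]) / counts[arm]
        eps *= decay                # anneal the exploration parameter eps_t

    return counts, mean_est

# Example: 4 arms, 2 objectives, uniform weights.
if __name__ == "__main__":
    mu = np.array([[0.9, 0.1], [0.6, 0.6], [0.1, 0.9], [0.3, 0.3]])
    counts, est = annealing_scalarized_bandit(mu, weights=np.array([0.5, 0.5]))
    print("play counts:", counts)
```

In practice, different weight vectors (e.g. from the standard, adaptive, or genetic approaches mentioned above) would each be run with such a scalarized policy so that different Pareto-optimal arms can be identified and played.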