In this work, we study the convergence of generic stochastic search algorithms toward the entire set of approximate solutions of continuous multi-objective optimization problems. Since the dimension of the set of interest is typically equal to the dimension of the parameter space, we focus on obtaining a finite and tight approximation, measured by the Hausdorff distance. Under mild assumptions about the process used to generate new candidate solutions, the limit approximation set is determined entirely by the archiving strategy. We propose a novel archiving strategy and investigate it both theoretically and empirically. To this end, we analyze the convergence behavior of the algorithm, yielding bounds on the obtained approximation quality as well as on the cardinality of the resulting approximation, and present numerical results.
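As a rough illustration of the framework sketched above, the following Python snippet pairs a generic stochastic candidate generator with an archiver and measures approximation quality between finite sets by the Hausdorff distance. It is a minimal sketch under stated assumptions: the `archive_update` shown is only a hypothetical placeholder based on a simple epsilon-dominance filter, not the archiving strategy proposed in the paper, and all names and parameters (`stochastic_search`, `batch`, `eps`, ...) are illustrative assumptions.

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between two finite point sets given as rows of A and B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def dominates(f, g, eps=0.0):
    """f (eps-)dominates g in the minimization sense."""
    return np.all(f <= g + eps) and np.any(f < g + eps)

def archive_update(candidates, archive, f, eps=0.1):
    """Hypothetical placeholder archiver: accept a candidate unless it is
    eps-dominated by an archive member, and drop members it dominates."""
    for x in candidates:
        fx = f(x)
        if any(dominates(f(a), fx, eps) for a in archive):
            continue
        archive = [a for a in archive if not dominates(fx, f(a))]
        archive.append(x)
    return archive

def stochastic_search(f, dim, bounds, n_iter=200, batch=10, rng=None):
    """Generic archive-based search: generate candidates at random and hand them
    to the archiver; the limit set is shaped by archive_update alone."""
    if rng is None:
        rng = np.random.default_rng(0)
    lo, hi = bounds
    archive = []
    for _ in range(n_iter):
        candidates = rng.uniform(lo, hi, size=(batch, dim))
        archive = archive_update(candidates, archive, f)
    return archive
```

For example, running `stochastic_search` on a simple bi-objective problem such as `f = lambda x: np.array([np.sum(x**2), np.sum((x - 1)**2)])` with `dim=2` and `bounds=(-2.0, 3.0)` yields a finite archive whose distance to a reference set can then be assessed with `hausdorff_distance`; how tight that distance becomes, and how large the archive grows, is exactly what the paper's bounds address for its own archiving strategy.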