Multiobjective evolutionary algorithms (MOEAs) typically produce a set of nondominated solutions as an approximation of the Pareto front. To make use of these solutions, a final decision-making process is indispensable in most cases, in which a small number of solutions must be selected. In this process, a decision maker selects solutions according to his or her preferences or based on knowledge acquired by inspecting the approximated Pareto front. Because an algorithm can obtain only a limited number of solutions, particularly when the number of objectives is large, a decision maker may wish to sample additional solutions in certain preferred regions. This paper proposes a reference-vector-based preference articulation (RVPA) method to obtain such additional solutions in preferred regions. After describing the proposed method in detail, we conduct experiments on six benchmark multiobjective optimization problems (MOPs) to assess the performance of RVPA. Our empirical results show that, by setting reference vectors in the objective space, the proposed RVPA method is able to obtain corresponding solutions in the preferred regions at a much lower computational cost than, for example, a restart strategy. In addition, when the reference vectors are distributed uniformly, the RVPA method also improves the overall quality (convergence and distribution) of the solutions obtained by an MOEA.
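
The abstract does not specify how solutions are matched to the decision maker's reference vectors; a common mechanism in reference-vector-based MOEAs is angle-based assignment in the objective space. The sketch below illustrates that general idea only, under assumed details: the function name `assign_to_reference_vectors`, the ideal-point translation, and the example vectors are hypothetical and not the paper's actual RVPA procedure.

```python
import numpy as np

def assign_to_reference_vectors(objectives, ref_vectors):
    """Assign each solution to its closest reference vector by angle.

    objectives : (n_solutions, n_objectives) array of objective values
    ref_vectors: (n_vectors, n_objectives) array of unit reference vectors
    Returns the index of the nearest reference vector for each solution.
    """
    # Translate objectives so the (estimated) ideal point sits at the origin,
    # then normalize each solution to a unit direction vector.
    translated = objectives - objectives.min(axis=0)
    norms = np.linalg.norm(translated, axis=1, keepdims=True)
    directions = translated / np.maximum(norms, 1e-12)

    # Cosine similarity between every solution and every reference vector;
    # the largest cosine corresponds to the smallest angle.
    cosine = directions @ ref_vectors.T
    return cosine.argmax(axis=1)

# Usage: two preferred reference vectors in a 2-objective space.
rng = np.random.default_rng(0)
pop = rng.random((10, 2))                      # a small candidate population
refs = np.array([[0.9, 0.1], [0.1, 0.9]])      # decision maker's preferences
refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
print(assign_to_reference_vectors(pop, refs))  # nearest vector per solution
```

Solutions whose directions fall near a preferred reference vector would then be retained or refined, which is consistent with the abstract's claim that setting reference vectors in the objective space steers search effort toward preferred regions.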