Most real-world problems are multiobjective. Traditional nonlinear multiobjective optimization techniques are usually computationally expensive, which makes it difficult to obtain solutions in polynomial time as the complexity of the problem increases. Furthermore, traditional mathematical programming techniques are normally highly sensitive to the shape or continuity of the Pareto front. Alternative approaches are therefore needed, such as the algorithms developed within evolutionary computation. Particle swarm optimization (PSO) is a relatively recent evolutionary optimization heuristic that has been very successful in a wide variety of optimization tasks. Its high convergence speed and relative simplicity make PSO a highly viable candidate for solving not only single-objective problems but also multiobjective optimization problems. However, PSO lacks an explicit mechanism to manage multiple objectives. In this dissertation, we analyze and extend the PSO algorithm to solve multiobjective optimization problems efficiently. Our research is divided into three main components. 1) We propose an extension of PSO to handle multiple objectives. The main novelty of the approach consists in using a clustering technique to divide the population of particles into several subswarms in decision variable space. This modification significantly improves the quality of the Pareto fronts produced, since a local search behavior emerges within each subswarm. In addition, in order to reduce the size of the nondominated set, we propose a complementary mechanism to decide whether a solution is accepted or not. 2) We present a mechanism to handle constraints with PSO. Our proposal uses a simple criterion based on the closeness of a particle to the feasible region in order to select a leader.
Our results indicate that the proposed approach is highly competitive with respect to three constraint-handling techniques representative of the state of the art in the area. This constraint-handling approach was incorporated into our multiobjective particle swarm optimization algorithm (MOPSO). 3) Finally, in order to improve the overall performance of the algorithm, we performed a study of MOPSO's parameters and then proposed a self-adaptation scheme to select the best parameter values. This proposal was validated using several test functions and metrics taken from the standard literature on evolutionary multiobjective optimization. The results indicate that our approach is a viable alternative, since it outperformed some of the best multiobjective evolutionary algorithms known to date.
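The subswarm idea in the first component can be illustrated with a minimal sketch. This is hypothetical code, not the dissertation's actual implementation: it groups particle positions with a simple k-means-style clustering in decision variable space, so that each resulting subswarm could then maintain its own leader and search locally.

```python
import numpy as np

def cluster_subswarms(positions, k, iters=20, seed=0):
    """Group particle positions into k subswarms using a simple
    k-means clustering in decision variable space (illustrative only)."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct particles
    centers = positions[rng.choice(len(positions), size=k, replace=False)]
    for _ in range(iters):
        # assign each particle to its nearest center
        dists = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its subswarm
        for j in range(k):
            if np.any(labels == j):
                centers[j] = positions[labels == j].mean(axis=0)
    return labels

# hypothetical usage: 30 particles in a 2-D decision space,
# drawn around three well-separated regions
rng = np.random.default_rng(1)
positions = np.vstack([rng.normal(loc=c, scale=0.1, size=(10, 2))
                       for c in ([0.0, 0.0], [5.0, 5.0], [10.0, 0.0])])
labels = cluster_subswarms(positions, k=3)
# each subswarm would then run PSO guided by its own local leader
```

The split into subswarms is what induces the local search behavior mentioned above: particles are attracted to a leader in their own region of decision space rather than a single global best.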
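The acceptance decision for the nondominated set builds on standard Pareto dominance. The sketch below shows only that standard dominance test and filter, not the dissertation's specific acceptance rule for reducing the set:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(front):
    """Keep only the objective vectors not dominated by any other."""
    return [a for a in front if not any(dominates(b, a) for b in front)]

# toy bi-objective front: (3.0, 3.0) is dominated by (2.0, 2.0)
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
result = nondominated(front)  # → [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
```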
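The leader-selection criterion of the second component can be sketched as follows, under assumed names and a toy constraint (the dissertation's exact rules may differ): distance to the feasible region is measured as total constraint violation, so a feasible particle (violation zero) is always preferred, and otherwise the least-violating particle leads.

```python
def total_violation(x, constraints):
    """Sum of constraint violations; each g(x) <= 0 means the
    constraint is satisfied, so only positive g(x) contributes."""
    return sum(max(0.0, g(x)) for g in constraints)

def select_leader(particles, constraints):
    """Pick the particle closest to the feasible region; a feasible
    particle has violation 0 and is therefore always preferred."""
    return min(particles, key=lambda x: total_violation(x, constraints))

# toy constraint: x[0] + x[1] - 1 <= 0
constraints = [lambda x: x[0] + x[1] - 1.0]
particles = [(2.0, 2.0), (0.2, 0.3), (1.0, 1.0)]
leader = select_leader(particles, constraints)  # → (0.2, 0.3), the feasible one
```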