Multi-objectivization is the process of reformulating a single-objective problem as a multi-objective problem and solving it with a multi-objective method in order to obtain a solution to the original single-objective problem. Multi-objectivization differs from traditional divide-and-conquer techniques in that it splits the objective function rather than the search space. Such reformulations were long thought to make a problem more complex; recent research, however, suggests that decomposition in the objective space can yield useful optimization techniques when coupled with a population-based search.

Machine-based optimization techniques are varied but are often built on analogies to real-world phenomena. Evolutionary algorithms, and their sub-family of genetic algorithms, are inspired by Darwin's survival-of-the-fittest theory and model the evolutionary process. These algorithms maintain a population of solutions in a global search that recombines existing solutions and creates new ones in order to generate useful solutions to hard problems. A sub-type of Evolutionary Algorithms (EAs) called Multiple Objective Evolutionary Algorithms (MOEAs) is designed to search for solutions to problems formulated with multiple objectives. Multi-objective problems have two or more partially competing objectives, such as minimizing cost while maximizing safety. The application of MOEAs to problems whose most natural formulation has a single objective is relatively new. This work investigates Genetic Algorithms (GAs) and their close relatives, MOEAs, both in a general categorical sense and as they are applied to multi-objectivization. A diversity classification framework for GAs is proposed, and multi-objectivization techniques are examined.
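The core idea of splitting the objective function rather than the search space can be sketched as follows. In this minimal illustration (all function names and objective definitions are assumptions for the example, not taken from the work above), a scalar objective that is a sum of two terms is decomposed into a pair of objectives, and candidate solutions are then compared by Pareto dominance instead of scalar fitness:

```python
# Minimal sketch of multi-objectivization: split a scalar objective
# f(x) = f1(x) + f2(x) into the vector objective (f1, f2) and compare
# solutions by Pareto dominance (minimization). The sub-objectives here
# are illustrative placeholders.

def f1(x):
    # Hypothetical sub-objective: contribution of the first half of x.
    return sum(v * v for v in x[: len(x) // 2])

def f2(x):
    # Hypothetical sub-objective: contribution of the second half of x.
    return sum(v * v for v in x[len(x) // 2:])

def single_objective(x):
    # The original scalar formulation the decomposition came from.
    return f1(x) + f2(x)

def dominates(a, b):
    """True if solution a Pareto-dominates b under (f1, f2) minimization:
    a is no worse on every objective and strictly better on at least one."""
    fa, fb = (f1(a), f2(a)), (f1(b), f2(b))
    return all(u <= v for u, v in zip(fa, fb)) and any(u < v for u, v in zip(fa, fb))

a = [0.0, 0.0, 1.0, 1.0]
b = [1.0, 1.0, 1.0, 1.0]
print(dominates(a, b))  # a matches b on f2 and is strictly better on f1
```

An MOEA built on such a decomposition ranks its population by dominance rather than by the scalar sum, which is what allows the objective-space split to change the search dynamics.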
Through study of an abstract problem, job-shop scheduling problems, and the Traveling Salesman Problem, principles governing design decisions for multi-objectivization are identified. Two ways in which multi-objectivization benefits search are theorized: a theory of how multi-objectivization effectively pairs fitness improvements with fitness decrements is developed, and a theory of how multi-objectivization circumvents non-linear interactions is proposed. Evidence for both beneficial effects is provided through a series of computational experiments. Two prevalent multi-objectivization techniques are compared both analytically and through these experiments. A third, more general version of the studied techniques is proposed, with results showing robust performance across a variety of computational budgets.
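The first theorized mechanism, pairing fitness improvements with fitness decrements, can be sketched with a toy acceptance test (the objective values below are illustrative assumptions, not data from the experiments). A move that improves one sub-objective while worsening another is rejected by a hill climber on the scalar sum, but survives under Pareto-based acceptance because neither solution dominates the other:

```python
# Sketch: a trade-off move rejected under the scalar objective is retained
# as non-dominated under the multi-objectivized formulation, keeping a
# stepping stone alive in the population. Values are illustrative only.

def dominates(fa, fb):
    # Pareto dominance for minimization over objective vectors.
    return all(u <= v for u, v in zip(fa, fb)) and any(u < v for u, v in zip(fa, fb))

current = (3.0, 3.0)   # (f1, f2) of the incumbent; scalar fitness = 6.0
neighbor = (1.0, 6.0)  # improves f1 but worsens f2; scalar fitness = 7.0

# Single-objective hill climbing: accept only strict scalar improvement.
accepted_scalar = sum(neighbor) < sum(current)   # False: 7.0 is worse

# Multi-objective acceptance: keep the neighbor if it is not dominated.
accepted_pareto = not dominates(current, neighbor)  # True: incomparable

print(accepted_scalar, accepted_pareto)  # False True
```

Under the scalar view the improvement on f1 is hidden by the paired decrement on f2; the multi-objective view keeps the two effects separate, which is one intuition for why such reformulations can help population-based search escape local optima.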