Automated design of artificial neural networks by evolutionary algorithms (neuroevolution) has generated much recent research, both because successful approaches would facilitate widespread use of intelligent systems based on neural networks, and because they would shed light on how “real” neural networks may have evolved. The main challenge in neuroevolution is that the search space of neural network architectures and their corresponding optimal weights can be high-dimensional and disparate, and therefore evolution may not discover an optimal network even if one exists. In this dissertation, I present a high-level encoding language that can be used to restrict the general search space of neural networks, and implement a problem-independent design system based on this encoding language. I show that this encoding scheme works effectively in 1) describing the search space in which evolution occurs; 2) specifying the initial configuration and evolutionary parameters; and 3) generating the final neural networks resulting from the evolutionary process in a human-readable manner. Evolved networks for “n-partition problems” demonstrate that this approach can evolve high-performance network architectures, and show by example that a small parsimony factor in the fitness measure can lead to the emergence of modular networks. Further, this approach is shown to work for encoding recurrent neural networks for a temporal sequence generation problem, and the tradeoffs between various recurrent network architectures are systematically compared via multi-objective optimization. Finally, it is shown that this system can be extended to address reinforcement learning problems by evolving architectures and connection weights in a hierarchical manner.
Experimental results support the conclusion that hierarchical evolutionary approaches, integrated into a system with a high-level descriptive encoding language, can be useful in designing modular networks, including those with recurrent connectivity.