In this thesis, we present our investigation and development of neural network ensembles, which have attracted considerable research interest in machine learning and have many fields of application. More specifically, the thesis focuses on two important factors of ensembles: the diversity among ensemble members and regularization.

Firstly, we investigate the relationship between diversity and generalization for classification problems in order to explain the conflicting opinions on the effect of diversity in classifier ensembles. This part proposes an ambiguity decomposition for classifier ensembles and introduces the ambiguity term of that decomposition as a new measure of diversity (its classical regression counterpart is sketched below). Empirical experiments confirm that ambiguity has the largest correlation with the generalization error in comparison with nine other frequently used diversity measures. We then conduct an empirical investigation of the relationship between diversity and generalization. The results show that diversity correlates highly with the generalization error only when diversity is low, and that the correlation decreases once diversity exceeds a threshold. These findings explain the conflicting empirical observations on whether diversity correlates with the generalization error of ensembles.

Secondly, this thesis investigates a particular kind of diversity, error diversity, in detail using negative correlation learning (NCL), and discovers that regularization should be used to address the overfitting problem of NCL. Although NCL has shown empirical success in creating neural network ensembles by emphasizing error diversity, in the absence of a solid understanding of its dynamics we observe that it is prone to overfitting, and we engage in a theoretical and empirical investigation to improve its performance by proposing the regularized negative correlation learning (RNCL) algorithm. RNCL imposes an additional regularization term on the error function of the ensemble and then decomposes the ensemble's training objective into individual objectives (a sketch of the resulting objective is given below).

This thesis provides a Bayesian formulation of RNCL and implements RNCL with two techniques: gradient descent with Bayesian inference, and an evolutionary multiobjective algorithm. The numerical results demonstrate the superiority of RNCL. In general, RNCL can be viewed as a framework rather than an algorithm in itself, meaning that several other learning techniques could make use of it.

Finally, we investigate ensemble pruning as one way to balance diversity, regularization and accuracy, and we propose a probabilistic ensemble pruning algorithm in this thesis. We adopt a left-truncated Gaussian prior (sketched below) for this probabilistic model to obtain a set of sparse and non-negative combination weights. Because incorporating this prior makes the required integral intractable, expectation propagation (EP) is employed to approximate the posterior of the weight vector, from which an estimate of the leave-one-out (LOO) error can be obtained without extra computation. The LOO error is therefore used together with the Bayesian evidence for model selection. An empirical study shows that our algorithm uses far fewer component learners but performs as well as, or better than, the non-pruned ensemble.
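For concreteness, the classical ambiguity decomposition for regression ensembles (due to Krogh and Vedelsby) is sketched below; the decomposition proposed in this thesis is its analogue for classifier ensembles, so the exact form differs, but the role of the ambiguity term is the same. Here $f_{\mathrm{ens}}(x) = \sum_i w_i f_i(x)$ denotes a convex combination of the individual predictors:
\[
\bigl(f_{\mathrm{ens}}(x) - y\bigr)^2 \;=\; \sum_i w_i \bigl(f_i(x) - y\bigr)^2 \;-\; \sum_i w_i \bigl(f_i(x) - f_{\mathrm{ens}}(x)\bigr)^2,
\qquad w_i \ge 0,\ \ \sum_i w_i = 1,
\]
where the second sum is the ambiguity term: it is non-negative, grows with the spread of the members' predictions around the ensemble output, and is subtracted from the weighted average of the individual errors.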
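To make the RNCL construction concrete, the following is a sketch in standard NCL notation rather than the thesis's exact formulation: given $M$ networks with outputs $f_i$, parameter vectors $\mathbf{w}_i$, ensemble output $f_{\mathrm{ens}} = \frac{1}{M}\sum_{i=1}^{M} f_i$, penalty strength $\lambda$, and an assumed per-network regularization parameter $\alpha_i$, each individual objective takes the form
\[
e_i \;=\; \frac{1}{2}\sum_{n=1}^{N}\bigl(f_i(x_n) - y_n\bigr)^2
\;+\; \lambda \sum_{n=1}^{N} \bigl(f_i(x_n) - f_{\mathrm{ens}}(x_n)\bigr) \sum_{j \neq i} \bigl(f_j(x_n) - f_{\mathrm{ens}}(x_n)\bigr)
\;+\; \alpha_i\, \mathbf{w}_i^{\mathsf{T}} \mathbf{w}_i .
\]
The first two terms are the usual NCL objective (empirical error plus negative-correlation penalty); the third is the added regularization term, whose strength $\alpha_i$ can be set by the Bayesian inference mentioned above.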
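As a sketch of the pruning prior (the notation here is assumed, not quoted from the thesis): each combination weight $w_i$ receives a Gaussian prior with precision $\alpha_i$ that is truncated on the left at zero, so that all prior mass lies on non-negative weights,
\[
p(w_i \mid \alpha_i) \;=\;
\begin{cases}
2\,\mathcal{N}\!\bigl(w_i \mid 0, \alpha_i^{-1}\bigr), & w_i \ge 0,\\
0, & w_i < 0,
\end{cases}
\]
where the factor of $2$ renormalizes the zero-mean Gaussian over the half-line. Because this truncated prior makes the posterior non-Gaussian, the marginal likelihood has no closed form, which is why EP is used to approximate the posterior and, as a by-product, the LOO error estimate.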
The results are also positive when the EP pruning algorithm is used to select classifiers from the population generated by the multiobjective regularized negative correlation learning algorithm, producing effective and efficient ensembles by balancing diversity, regularization and accuracy.