This dissertation investigates Bayesian Neural Networks, a new approach that merges the potential of artificial neural networks with the rigorous analytical framework of Bayesian statistics.
Conventional neural networks trained with backpropagation typically perform well, but they present convergence problems when not enough training data is available, or when the optimization becomes trapped in local minima, which results in long training times and overfitting. For these reasons, researchers are investigating new learning algorithms for neural networks based on principles from other areas of science, such as statistics, fuzzy logic, and genetic algorithms.
This dissertation studies and evaluates a new learning algorithm based on Bayesian statistics, which consists in using the Bayesian inference mechanism to calculate the values of the parameters of neural networks.
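The Bayesian treatment of network parameters described above can be summarized by Bayes' rule applied to the weight vector $\mathbf{w}$ given the training data $D$ (a standard formulation, stated here for clarity; the specific priors and likelihoods used in the dissertation are not detailed in this abstract):

$$
p(\mathbf{w} \mid D) \;=\; \frac{p(D \mid \mathbf{w})\, p(\mathbf{w})}{p(D)}
$$

where $p(\mathbf{w})$ is the prior over the weights, $p(D \mid \mathbf{w})$ is the likelihood of the data, and $p(\mathbf{w} \mid D)$ is the posterior distribution whose computation is the central difficulty addressed by the approximation methods studied in this work.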
The main steps of this research are: the study of the differences between the classical statistical approach and the Bayesian statistical approach to learning in neural networks; the evaluation of Bayesian neural networks (RNBs) on benchmark applications; and the evaluation of RNBs on real applications.
The main difference between classical and Bayesian statistics with regard to learning in neural networks lies in how the parameters are calculated. For example, the maximum-likelihood principle of classical statistics, on which the backpropagation algorithm is based, is characterized by calculating a single vector of network parameters. Bayesian inference, in contrast, is characterized by calculating a probability density function over the network parameters; in practice this requires approximations or numerical methods, because the exact analytical treatment is intractable due to the high dimensionality of the parameter vector. This dissertation gives special emphasis to two such methods: the Gaussian approximation and Markov Chain Monte Carlo (MCMC).
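To make the MCMC idea concrete, the sketch below samples the posterior over the weights of a deliberately tiny one-hidden-unit network with a random-walk Metropolis algorithm. This is an illustrative toy, not the dissertation's implementation: the network architecture, priors, noise level, and step size are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (illustrative assumption)
x = np.linspace(-2, 2, 40)
y = np.tanh(1.5 * x) + 0.1 * rng.standard_normal(x.size)

def predict(w, x):
    # Tiny network: y = w2 * tanh(w1 * x + b1) + b2
    w1, b1, w2, b2 = w
    return w2 * np.tanh(w1 * x + b1) + b2

def log_posterior(w, x, y, noise=0.1, prior_std=2.0):
    # log p(w | D) up to a constant: Gaussian likelihood + Gaussian prior
    resid = y - predict(w, x)
    log_lik = -0.5 * np.sum(resid**2) / noise**2
    log_prior = -0.5 * np.sum(w**2) / prior_std**2
    return log_lik + log_prior

def metropolis(x, y, n_samples=2000, step=0.05):
    w = np.zeros(4)                       # initial weight vector
    logp = log_posterior(w, x, y)
    samples = []
    for _ in range(n_samples):
        w_new = w + step * rng.standard_normal(4)      # random-walk proposal
        logp_new = log_posterior(w_new, x, y)
        if np.log(rng.uniform()) < logp_new - logp:    # accept/reject step
            w, logp = w_new, logp_new
        samples.append(w.copy())
    return np.array(samples)

samples = metropolis(x, y)
# Bayesian prediction: average the network output over posterior samples,
# discarding the first half of the chain as burn-in
y_mean = np.mean([predict(w, x) for w in samples[1000:]], axis=0)
```

Averaging predictions over posterior samples, rather than using a single maximum-likelihood weight vector, is precisely the contrast with backpropagation drawn above; in practice far more sophisticated samplers (e.g., hybrid Monte Carlo) are used for realistic network sizes.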
To evaluate the performance of these Bayesian learning algorithms, a number of tests were carried out on benchmark applications of time series forecasting, classification, and function approximation. Real applications in electrical load time series forecasting and face recognition were also developed. Moreover, the Bayesian learning algorithms were compared with backpropagation, neuro-fuzzy systems, and other statistical techniques such as Box & Jenkins and Holt-Winters.
This dissertation has shown that the advantages of the Bayesian learning algorithms are the minimization of overfitting, the control of model complexity (the principle of Occam's razor), and good generalization even with few training data.