This dissertation presents a new proposal of neurofuzzy
systems (models) which, in addition to the learning
capacity common to neural networks and neurofuzzy systems,
present the following features: structure learning; the
use of recursive partitioning; a greater number of inputs
than is usually allowed in neurofuzzy systems; and
hierarchical rules. Defining the structure is necessary
when implementing any model. In the case of a neural
network, for example, one must first establish its
structure (number of layers and number of neurons per
layer) before any test is performed. Thus, an important
feature for any model is an automatic learning method for
creating its structure. A system that allows a larger
number of inputs is also important, since it extends the
range of possible applications. The hierarchical-rules
feature results from the structure learning method
developed for these two models.
The work involved three main parts: a study of existing
neurofuzzy systems and of the most common methods for
adjusting their parameters; the definition and
implementation of two hierarchical neurofuzzy models; and
case studies.
The study of neurofuzzy systems (NFS) was accomplished
through a survey of the area, covering the advantages,
drawbacks and main features of NFS. A taxonomy of NFS was
then proposed, taking into account the neural and fuzzy
features of the existing systems. This study pointed out
the limitations of neurofuzzy systems, mainly their poor
capability of creating their own structure and the reduced
number of inputs they allow.
The study of methods for parameter adjustment focused on
the following algorithms: the least-squares estimator
(LSE) and its solutions by iterative numerical methods;
and the basic gradient descent method and its descendants,
such as Backpropagation and Rprop (Resilient Propagation).
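The gradient descent principle behind these parameter-adjustment methods can be illustrated with a minimal sketch (an illustration only, not the dissertation's code): fitting the two parameters of a linear model by descending the gradient of the mean squared error, the same idea that Backpropagation and Rprop refine.

```python
# Minimal gradient descent on the least-squares error of y = a*x + b.
# Illustrative only; the learning rate and epoch count are assumptions.

def gradient_descent_fit(xs, ys, lr=0.01, epochs=2000):
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of the mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # data generated by y = 2x + 1
a, b = gradient_descent_fit(xs, ys)
print(round(a, 2), round(b, 2))    # converges close to 2.0 and 1.0
```

Rprop differs from this basic scheme in that each parameter keeps its own step size, which is increased or decreased according to the sign changes of its gradient rather than its magnitude.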
The definition of the two new neurofuzzy models took into
account the desirable features and the limitations of
existing NFS. It was observed that the partitioning format
and the rule base of an NFS have great influence on its
performance and limitations. Hence the decision to use a
new partitioning method, recursive partitioning, to remove
or reduce the existing limitations. Quadtree and BSP
partitioning were then adopted, generating the so-called
Quadtree Hierarchical Neurofuzzy model (NFHQ) and the BSP
Hierarchical Neurofuzzy model (NFHB). By using these kinds
of partitioning, a new class of NFS was obtained, allowing
structure learning in addition to parameter learning. This
feature represents a great differential in relation to
traditional NFS, besides overcoming the limitation on the
number of allowed inputs.
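The idea of BSP-style recursive partitioning can be sketched as follows. This is an assumed, simplified illustration in one dimension, not the NFHB implementation: a region of the input space is split in two whenever the data it contains are still too spread out, and the nested regions that result are what give rise to hierarchical rules.

```python
# Sketch of BSP-style recursive partitioning (assumed illustration):
# bisect a region while its points are not yet homogeneous enough,
# so the structure (the partition tree) is learned from the data.

def bsp_partition(points, lo, hi, depth=0, max_depth=3, tol=0.5):
    """Return the list of (lo, hi) leaf intervals of the partition."""
    spread = max(points) - min(points) if points else 0.0
    if depth == max_depth or spread <= tol:
        return [(lo, hi)]            # leaf region: one local rule
    mid = (lo + hi) / 2.0            # binary split of the region
    left = [p for p in points if p < mid]
    right = [p for p in points if p >= mid]
    return (bsp_partition(left, lo, mid, depth + 1, max_depth, tol)
            + bsp_partition(right, mid, hi, depth + 1, max_depth, tol))

leaves = bsp_partition([0.1, 0.2, 0.8, 0.9], 0.0, 1.0)
print(leaves)  # [(0.0, 0.5), (0.5, 1.0)]
```

Quadtree partitioning follows the same recursion in two dimensions, splitting a cell into four quadrants instead of two halves.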
In the case studies, the two neurofuzzy models were tested
on 16 different cases, including traditional benchmarks
and problems with a greater number of inputs. Among the
cases studied are: the Iris data set; the two-spirals
problem; the forecasting of the Mackey-Glass chaotic time
series; some diagnosis and classification problems found
in the machine learning literature; and a real application
involving load forecasting. The two new neurofuzzy models
were implemented with a 32-bit Pascal compiler for PC
microcomputers running DOS or Linux.
The tests have shown that these new models fit the tested
data sets well; create their own structure; adjust their
parameters while presenting good generalization
performance; and automatically extract fuzzy rules.
Furthermore, applications with a greater number of inputs
become feasible with these neurofuzzy models. In short,
two neurofuzzy models were developed with the capability
of structure learning in addition to parameter learning.
Moreover, these new models offer good interpretability
through their hierarchical fuzzy rules; they are not black
boxes like neural networks.