Title: PRODUCING AND EVALUATING VISUAL REPRESENTATIONS TOWARD EFFECTIVE EXPLAINABLE ARTIFICIAL INTELLIGENCE
Author: BIANCA MOREIRA CUNHA
Contributor(s): SIMONE DINIZ JUNQUEIRA BARBOSA - Advisor
Cataloging: 23/JUL/2025
Language(s): ENGLISH - UNITED STATES
Type: TEXT
Subtype: THESIS
Notes: All data contained in the documents are the sole responsibility of the authors. The data used in the descriptions of the documents are in conformity with the systems of the administration of PUC-Rio.
Reference(s):
[pt] https://www.maxwell.vrac.puc-rio.br/projetosEspeciais/ETDs/consultas/conteudo.php?strSecao=resultado&nrSeq=71825&idi=1
[en] https://www.maxwell.vrac.puc-rio.br/projetosEspeciais/ETDs/consultas/conteudo.php?strSecao=resultado&nrSeq=71825&idi=2
DOI: https://doi.org/10.17771/PUCRio.acad.71825
Abstract:
The employment of Machine Learning (ML) models across diverse domains has
grown exponentially in recent years. These models undertake critical
tasks spanning medical diagnoses, criminal sentencing, and loan approvals. To
enable users to grasp the rationale behind predictions and engender trust, these
models should be interpretable. Equally vital is the capability of developers to
pinpoint and rectify any erroneous behaviors. In this context emerges the field
of Explainable Artificial Intelligence (XAI), which aims to develop methods to
make ML models more interpretable while maintaining their performance level.
Various methods have been proposed, many leveraging visual explanations to
elucidate model behavior. However, a notable gap remains: a lack of rigorous
assessment regarding the effectiveness of these explanations in enhancing
interpretability. Previous findings showed that the visualizations these
methods present can confuse even users with a mathematical background, and
that XAI researchers need to collaborate with Information Visualization
experts to develop such visualizations and to test them with users of
various backgrounds. One of the most widely used XAI methods in recent years
is SHAP, whose visual representations had not had their efficacy assessed
before. We therefore conducted a study in which we worked with visualization
researchers to design visualizations based on the information the SHAP
method provides, taking into account the factors that the literature
considers to make an explanation effective. We evaluated these
visualizations with people from various backgrounds to assess whether they
are effective in improving users' understanding of the model. Based on the
results of this study, we propose an approach to produce and evaluate visual
representations of explanations targeting their effectiveness.
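
For readers unfamiliar with SHAP, the sketch below illustrates the kind of
information the method provides and one of its standard visual
representations, using the open-source shap Python library. The model and
dataset here are illustrative choices, not the setup studied in the thesis.

    # A minimal sketch of the information SHAP provides (illustrative
    # model and dataset; not the setup used in the thesis).
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Train a simple model on a public regression dataset.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # SHAP assigns each feature an additive contribution to each prediction.
    explainer = shap.Explainer(model)
    shap_values = explainer(X)

    # One of SHAP's standard visual representations: a beeswarm summary plot
    # showing how feature values relate to their contributions to predictions.
    shap.plots.beeswarm(shap_values)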