A method for assessing the degree of confidence in the self-explanations of GPT models
A.N. Lukyanov, A.M. Tramova
Abstract. With the rapid growth in the use of generative neural network models for practical tasks, the problem of explaining their decisions is becoming increasingly acute. As neural network-based solutions are introduced into medical practice, government administration, and defense, the demands on the interpretability of such systems will only increase. In this study, we propose a method for verifying post factum the reliability of self-explanations provided by models, by comparing the model's attention distributions during the generation of the response and of its explanation. The authors propose and develop methods for the numerical evaluation of the reliability of answers provided by generative pre-trained transformers. It is proposed to use the Kullback–Leibler divergence between the attention distributions of the model during the generation of the response and of the subsequent explanation. Additionally, during the generation of the explanation, it is proposed to compute the ratio of the model's attention to the original query versus its own generated response, to estimate how much the self-explanation was influenced by that response. To obtain these values, an algorithm for recursively computing the model's attention across the generation steps is proposed. The study demonstrated the effectiveness of the proposed methods, identifying metric values corresponding to correct and incorrect explanations and responses. We analyzed the existing methods for assessing the reliability of generative model responses and noted that the overwhelming majority of them are difficult for an ordinary user to interpret. We therefore proposed our own methods and tested them on the generative models most widely used at the time of writing. As a result, we obtained typical values for the proposed metrics, an algorithm for their computation, and a visualization.
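The two metrics described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes attention weights have already been extracted from the model as normalized vectors over the input tokens, and all function names and the toy data below are illustrative only.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two attention
    distributions (e.g. attention during the answer vs. the explanation).
    A small epsilon guards against zeros before renormalizing."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def attention_ratio(attn, query_len):
    """Share of attention mass placed on the original query tokens
    (positions < query_len) versus the model's own generated response
    tokens, during the generation of the explanation."""
    attn = np.asarray(attn, dtype=float)
    query_mass = attn[:query_len].sum()
    total_mass = attn.sum()
    return float(query_mass / total_mass)

# Toy example: attention over 6 tokens; the first 4 belong to the query,
# the last 2 to the model's own earlier response.
answer_attn = np.array([0.30, 0.25, 0.20, 0.15, 0.05, 0.05])
explanation_attn = np.array([0.28, 0.27, 0.18, 0.17, 0.06, 0.04])

divergence = kl_divergence(answer_attn, explanation_attn)
ratio = attention_ratio(explanation_attn, query_len=4)
```

A small divergence would indicate that the explanation attends to roughly the same tokens as the answer did; a ratio close to 1 would mean the explanation is grounded mostly in the query rather than in the model's own response.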
Keywords: neural networks, metrics, language models, interpretability, GPT, LLM, XAI
For citation. Lukyanov A.N., Tramova A.M. A method for assessing the degree of confidence in the self-explanations of GPT models. News of the Kabardino-Balkarian Scientific Center of RAS. 2024. Vol. 26. No. 4. Pp. 54–61. DOI: 10.35330/1991-6639-2024-26-4-54-61
Information about the authors
Andrey N. Lukyanov, Student, Research Assistant, Center for Advanced Studies in Artificial Intelligence, Plekhanov Russian University of Economics;
117997, Russia, Moscow, 36 Stremyanny Lane;
andreylukianovai@gmail.com
Aziza M. Tramova, Doctor of Economic Sciences, Professor, Professor of the Department of Informatics, Plekhanov Russian University of Economics;
117997, Russia, Moscow, 36 Stremyanny Lane;
Tramova.AM@rea.ru, ORCID: https://orcid.org/0000-0002-4089-6580, SPIN-code: 8583-3592