Academic Journal of Medicine & Health Sciences, 2024, 5(9); doi: 10.25236/AJMHS.2024.050907.

Research on the Strategy of MedKGGPT Model in Improving the Interpretability and Security of Large Language Models in the Medical Field

Author(s)

Jinzhu Yang

Corresponding Author:
Jinzhu Yang
Affiliation(s)

AI Research, Dyania Health Inc, Jersey City, New Jersey, 07310, United States

Abstract

With the deepening application of artificial intelligence in medicine, and especially the widespread deployment of large language models, improving their interpretability and security has become an urgent problem. Focusing on medical diagnostic tasks, this article proposes an innovative MedKGGPT model strategy that aims to significantly enhance the interpretability and security of medical decisions by integrating machine learning with knowledge reasoning. Through a carefully designed architecture comprising a knowledge inference module, a machine learning module, and an intelligent fusion module, the MedKGGPT model achieves a deep integration of medical expertise and data-driven learning. The model first constructs a detailed ontology knowledge base and business-rule library for medical diagnosis, providing a solid theoretical foundation for subsequent reasoning and decision-making. The machine learning module then learns and extracts key features from large volumes of medical data, while the knowledge reasoning module draws on expert knowledge to analyze and validate these features. The intelligent fusion module efficiently integrates the two results, constructing a decision evidence chain through credibility evaluation so that the model maintains high classification accuracy while keeping its decision-making process highly interpretable. To verify the effectiveness of the MedKGGPT model, this paper selects cervical cancer cell recognition from liquid-based cytology images as an experimental case. The experimental results show that the model not only achieves strong classification accuracy but also continues to improve over successive iterations; more importantly, it provides a clear, explicit explanatory path for each medical decision, greatly enhancing medical personnel's trust in AI-assisted diagnosis.
The contributions of this article are twofold: it proposes a novel MedKGGPT model strategy that enriches the interpretability theory of artificial intelligence models in medicine, and it offers a practical, feasible solution for improving the security of medical decision-making and fostering trust between doctors and patients. This research has profound significance for promoting the healthy development of artificial intelligence technology in the medical field.
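The credibility-weighted fusion described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the class names (`Evidence`, `Decision`), the function `fuse`, and the credibility values are all assumptions made for illustration. It shows how an intelligent fusion step might combine a classifier's prediction with a knowledge-rule verdict, weight each by its credibility, and return both a label and the evidence chain that explains it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Evidence:
    source: str          # "ml" or "knowledge" (hypothetical labels)
    detail: str          # human-readable justification for the evidence chain
    credibility: float   # weight in [0, 1] assigned by credibility evaluation

@dataclass
class Decision:
    label: str
    confidence: float
    evidence_chain: List[Evidence]

def fuse(ml_label: str, ml_prob: float,
         kb_label: str, kb_credibility: float,
         ml_credibility: float = 0.8) -> Decision:
    """Credibility-weighted fusion of an ML prediction and a rule-base verdict."""
    chain = [
        Evidence("ml", f"classifier predicted '{ml_label}' (p={ml_prob:.2f})",
                 ml_credibility * ml_prob),
        Evidence("knowledge", f"rule base supports '{kb_label}'", kb_credibility),
    ]
    # Accumulate weighted credibility per candidate label
    scores: dict = {}
    for ev, label in zip(chain, (ml_label, kb_label)):
        scores[label] = scores.get(label, 0.0) + ev.credibility
    best = max(scores, key=scores.get)
    # Normalized score doubles as a rough confidence estimate
    return Decision(best, scores[best] / sum(scores.values()), chain)

# Example: classifier and knowledge base agree on an abnormal cell
d = fuse("abnormal", 0.92, "abnormal", 0.85)
```

When the two modules disagree, the label with the higher weighted credibility wins, and the returned `evidence_chain` records both sides, which is one plausible reading of the "decision evidence chain" the abstract describes.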

Keywords

MedKGGPT Model, Medical Field, Interpretability, Security, Integration of Machine Learning and Knowledge Reasoning

Cite This Paper

Jinzhu Yang. Research on the Strategy of MedKGGPT Model in Improving the Interpretability and Security of Large Language Models in the Medical Field. Academic Journal of Medicine & Health Sciences (2024), Vol. 5, Issue 9: 40-45. https://doi.org/10.25236/AJMHS.2024.050907.
