CONFERENCE TALK
Generative AI in healthcare: Large Language Models in medicine – researchers show how AI can improve healthcare in the future
Dr. Isabella Wiest, a researcher in the team of Professor Jakob N. Kather and assistant physician at the University Medical Centre Mannheim, spoke at the annual conference of the Deutsche Gesellschaft für Innere Medizin e. V. (German Society for Internal Medicine), held from 13 to 16 April 2024 in Wiesbaden. In her talk, she explained the great potential of generative AI and Large Language Models (LLMs) in medicine as well as current limitations and open challenges.
AI applications and technologies such as retrieval augmented generation can make physicians' work easier and more efficient, leaving more time for patients and less for documentation.
Dr. Isabella Wiest is a researcher in the Clinical Artificial Intelligence group of Professor Jakob N. Kather and an assistant physician at the University Medical Centre Mannheim.
The patient journey generates large amounts of health-related data such as medical reports, radiology and histopathology images, as well as genomic and sensor data. These data can be analyzed by AI to solve clinically relevant problems. However, around 80% of these data are unstructured and often contain important additional information in free text, which currently has to be extracted manually. This laborious task could also be performed by LLMs, as the group has recently shown in a preprint. At the same time, sensitive patient data needs to be protected. Locally run models such as the Llama family avoid the problem of transferring data to external servers that arises when working with OpenAI’s ChatGPT. Analyses from the group show that such models can deliver very good results, for example when analyzing endoscopy findings or admission notes; a minimal sketch of such an extraction step follows below.
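The idea of turning free text into structured data with a locally run model can be illustrated with a short sketch. The model checkpoint, the endoscopy report, the prompt, and the JSON schema below are illustrative assumptions, not the group's actual pipeline; any local instruction-tuned Llama-style model could be substituted.

```python
# Minimal sketch (assumptions, not the group's actual setup): extract structured
# findings from a free-text endoscopy report with a locally run, Llama-style
# model via Hugging Face transformers.
import json
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # assumed local checkpoint
    device_map="auto",
)

# Invented example report for illustration only.
report = (
    "Colonoscopy: single 8 mm polyp in the sigmoid colon, removed by snare. "
    "No signs of bleeding. Bowel preparation adequate."
)

prompt = (
    "Extract the following fields from the endoscopy report and answer with JSON "
    "only, using the keys polyp_found (true/false), polyp_size_mm (number or null) "
    "and location (string or null).\n\n"
    f"Report: {report}\nJSON:"
)

# Because the model runs locally, the report text never leaves the hospital's
# own infrastructure.
output = generator(prompt, max_new_tokens=120, do_sample=False)[0]["generated_text"]

# Parse the JSON that follows the prompt; real systems would validate the output.
structured = json.loads(output[len(prompt):].strip())
print(structured)
```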
Studies suggest that generative AI can also be helpful in summarizing clinical texts, with the generated summaries containing fewer errors than those written by human experts, Wiest explains. One promising approach to overcoming the problem of so-called hallucinations in generative tasks is ‘retrieval augmented generation’ (RAG), which draws on reliable knowledge bases, for example medical guidelines, to generate informed responses that can support clinical decision-making; this has recently been demonstrated in a research paper published in NEJM AI, and a short sketch of the retrieval step follows below. The digitalization of healthcare data and the implementation of LLMs could improve clinical processes and support documentation, diagnosis and treatment. However, it is important to carefully validate the AI models and ensure traceable and explainable results. Researchers are also working on a ‘generalist’ AI that can integrate different, multi-modal data types to provide a holistic view of medical issues.
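The retrieval step behind RAG can be sketched in a few lines. The guideline snippets and the clinical question below are invented examples, and TF-IDF ranking stands in for the embedding-based retrieval that production systems typically use; the composed prompt would then be passed to an LLM such as the one sketched above.

```python
# Minimal RAG retrieval sketch (illustrative assumptions only): rank guideline
# snippets against a clinical question and ground the prompt in the best match,
# so the model answers from a reliable source rather than hallucinating.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented guideline snippets; a real system would index full guideline documents.
guideline_snippets = [
    "Surveillance colonoscopy after removal of 1-2 small adenomas is recommended after 7-10 years.",
    "Patients with iron deficiency anaemia should be evaluated for gastrointestinal blood loss.",
    "Proton pump inhibitors are first-line therapy for erosive reflux oesophagitis.",
]

question = "When should the next colonoscopy be scheduled after removing two small adenomas?"

# Rank snippets by lexical similarity to the question; embedding models are the
# usual choice, TF-IDF keeps this sketch dependency-light.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(guideline_snippets)
query_vec = vectorizer.transform([question])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
best_snippet = guideline_snippets[scores.argmax()]

# The retrieved guideline text is prepended to the question, grounding the
# model's answer in the knowledge base.
prompt = (
    f"Guideline excerpt: {best_snippet}\n\n"
    f"Question: {question}\n"
    "Answer based only on the excerpt:"
)
print(prompt)
```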
Finally, Wiest emphasizes the need for interoperability of digital systems to enable the integration of AI into existing healthcare infrastructures as well as robust evaluation of LLMs prior to implementation. Despite the great potential, there are still some challenges ahead before LLMs can be integrated into everyday clinical practice.
Further reading (in German): https://healthcare-in-europe.com/de/news/medizinische-ki-dea-ex-machina.html