Cancer treatment:

Experts propose tailored guidelines for the use and regulation of artificial intelligence

AI models are set to transform cancer care in the future by providing personalized diagnosis and treatment options. The emergence of Generalist Medical Artificial Intelligence (GMAI) models poses a significant challenge to current regulatory frameworks. In a commentary published in the journal Nature Reviews Cancer, Stephen Gilbert and Jakob N. Kather, both professors at the EKFZ for Digital Health at TU Dresden, discuss how the regulation of these models could be handled in the future. Policy-makers will have to decide whether to radically adapt current frameworks, block generalist approaches, or force them onto narrow tracks.

Gilbert, S., Kather, J.N. Guardrails for the use of generalist AI in cancer care. Nat Rev Cancer (2024). https://doi.org/10.1038/s41568-024-00685-8

[Portrait: Prof. Dr. Stephen Gilbert]

Current Artificial Intelligence (AI) models for cancer treatment are trained and approved only for specific intended purposes. GMAI models, in contrast, can handle a wide range of medical data including different types of images and text. For example, for a patient with colorectal cancer, a single GMAI model could interpret endoscopy videos, pathology slides and electronic health record (EHR) data. Hence, such multi-purpose or generalist models represent a paradigm shift away from narrow AI models.

Regulatory bodies face a dilemma in adapting to these new models because current regulations are designed for applications with a defined and fixed purpose, a specific set of clinical indications and a target population. Adaptation or extension after approval is not possible without going through quality management and regulatory and administrative processes again. GMAI models, with their adaptability and their predictive potential even without specific training examples – so-called zero-shot reasoning – therefore pose challenges for validation and reliability assessment. Currently, they are excluded by all international frameworks.

The authors point out that existing regulatory frameworks are not well suited to handle GMAI models due to their characteristics. “If these regulations remain unchanged, a possible solution could be hybrid approaches. GMAIs could be approved as medical devices and then the range of allowed clinical prompts could be restricted,” says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at TU Dresden. “But this approach would force models with the potential to intelligently address new questions and multimodal data onto narrow tracks through rules written when these technologies were not anticipated. Specific decisions should be made on how to proceed with these technologies, rather than excluding their ability to address questions they were not specifically designed for. New technologies sometimes call for new regulatory paradigms,” says Prof. Gilbert.

The researchers argue that it will be impossible to prevent patients and medical experts from using generic models or unapproved medical decision support systems. It is therefore crucial to maintain the central role of physicians and to empower them as interpreters of this information.

In conclusion, the researchers propose a flexible regulatory approach that accommodates the unique characteristics of GMAI models while ensuring patient safety and supporting physician decision-making. They point out that a rigid regulatory framework could hinder progress in AI-driven healthcare, and call for a nuanced approach that balances innovation with patient welfare.
