ABSTRACT
With the advent of large language models (LLMs), the artificial intelligence revolution in medicine and radiology is now more tangible than ever. Every day, an increasingly large number of articles are published that utilize LLMs in radiology. To adopt and safely implement this new technology in the field, radiologists should be familiar with its key concepts, understand at least the technical basics, and be aware of the potential risks and ethical considerations that come with it. In this review article, the authors provide an overview of the LLMs that might be relevant to the radiology community and include a brief discussion of their short history, technical basics, ChatGPT, prompt engineering, potential applications in medicine and radiology, advantages, disadvantages, and risks, ethical and regulatory considerations, and future directions.
Main points
• A language model is a computer program for processing human language, ranging in size and complexity from small rule-based systems to sophisticated models driven by artificial intelligence (AI).
• Large language models (LLMs) are usually based on a transformer architecture with a particular attention mechanism.
• Two recent accomplishments, namely ChatGPT and GPT-4, have significantly raised the bar for the capabilities of existing AI systems.
• LLMs have proven to be successful in many tasks in radiology; however, further studies are required to investigate the feasibility of their use in medical imaging.
• Unresolved ethical and legal issues should be addressed before LLMs are implemented within radiology practice.
Radiology is one of the most technology-driven medical specialties and has always been closely linked to computer science. In particular, ever since the picture archiving and communication system (PACS) revolution, there have been many examples of emerging technologies that have shaped and reshaped the day-to-day practice of radiologists.1 More recently, the scientific community has witnessed the remarkable progress of artificial intelligence (AI), and the advances in image-recognition tasks are likely to herald another significant leap forward for radiology practice.2 There are potential applications of AI in almost the entire radiology workflow, such as image quality improvement (e.g., reducing image acquisition time and/or radiation dose), image post-processing (e.g., image annotation and segmentation), and image interpretation (e.g., prediction of diagnosis).3 With the advent of natural language processing (NLP), and especially with the development of large language models (LLMs), it is becoming clear that AI applications in radiology are not limited to imaging-related tasks; because radiologists mainly provide textual reports conveying their interpretation of diagnostic images and its clinical significance, LLMs stand to have an impact on radiology as well.
The origins of LLMs date back to the 1950s, a pivotal decade that witnessed the establishment of AI as an academic discipline and the successful demonstration of machine translation through the Georgetown–IBM experiment.4 Before delving into the significant milestones that have led to the remarkable technology of today, it is imperative to establish definitions and introduce key concepts. In essence, a language model is a computer program designed to process human language that varies in size and complexity from small rule-based systems to sophisticated AI-driven models. On the other hand, LLMs represent an exceptional class of language models distinguished by their scale, complexity, and emergent capabilities not found in their smaller-scale counterparts.5 These models, built on deep learning architectures and trained on vast data with billions of parameters, excel in a diverse range of NLP tasks, such as summarization, translation, sentiment analysis, and text generation. Put simply, LLMs predict the next word or token in a given sequence of words.
Among the earliest examples of language models was one of the first “chatbots,” coded in the 1960s and named ELIZA, which was based on a set of predefined rules and used pattern matching to simulate human conversation.6 Although ELIZA and the other early language models were limited in their capabilities and struggled to handle the complexity and nuances of human language, research in the field of NLP had begun, and the interest continued to grow.
The breakthrough in LLMs occurred in the 1990s with the emergence of the internet and enhanced computational capabilities, facilitating access to extensive text corpora for training datasets. Notably, the introduction of the long short-term memory (LSTM) network in 1997 can be regarded as a turning point for precursors to present-day LLMs.7 The pace of technological advancement gained further momentum, culminating in the groundbreaking publication of “Attention Is All You Need” in 2017, which introduced the transformer network architecture.8 Subsequently, in 2018, the release of the generative pre-trained transformer (GPT) and the bidirectional encoder representations from transformers (BERT) marked a turning point in the NLP landscape and ushered in the era of LLMs. From there, LLMs have continued to grow in all respects, gaining popularity within the general population as well as the medical community (Figure 1).9
This review article provides an overview of the LLMs that might be relevant to the radiology community, with a brief discussion of the technical basics, the ChatGPT revolution, prompt engineering, potential applications in medicine and radiology, the advantages, disadvantages, and risks, the ethical and regulatory considerations, and future directions. Readers are advised to first refer to Table 1 for definitions of key terms that are used extensively in this discussion of LLMs.
Technical basics of large language models
Language modeling can be technically divided into the following development stages: statistical language models,10,11,12 neural language models,13,14 and pre-trained language models (PLMs) (Figure 2).15,16 The last of these is trained only once, with unsupervised learning methods (i.e., learning patterns from unlabeled data), on a massive amount of text data and can then be used for a variety of tasks without being retrained from scratch.15 With capabilities of zero-shot and few-shot learning, PLMs can generalize and adapt to new tasks and data with no or minimal additional training.17,18,19 Research has shown that scaling PLMs in terms of data or model size frequently improves the performance of the model on downstream tasks.20,21,22 These large-sized PLMs exhibit surprising behavioral differences from smaller PLMs and demonstrate emergent abilities in solving several complex tasks, such as in-context learning, instruction following, and step-by-step reasoning.5,23 Through in-context learning, they can produce the desired results without extra training or gradient updates, and they can handle new tasks from instructions alone, without being given explicit examples. Thus, the research community coined the term LLMs for these massive PLMs, which can contain hundreds of billions of parameters.24,25
Key concepts in LLMs are shown and explained in Figure 3. LLMs are typically based on the transformer architecture, which is highly parallelizable from a computational standpoint.26 Transformers are essentially composed of encoders and decoders, each of which has a particular attention mechanism.8 The attention mechanism is, in essence, a dot product operation that yields similarity scores; these scores enable the model to pay more attention to some inputs than to others, regardless of their position in the input sequence, and thereby to better comprehend the context of a word. Furthermore, in contrast to recurrent neural networks, the attention mechanism permits the model to view the entire sentence or even the entire paragraph at once, rather than one word at a time.
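To make this concrete, the following is a minimal, illustrative sketch of single-head dot-product attention in Python (using NumPy; the variable names and toy dimensions are ours for demonstration, not taken from any specific model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: dot-product similarity scores, softmax
    weighting, then a weighted sum of the value vectors."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # similarity of each query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                # context-aware token representations

# Toy input: 4 tokens with embedding dimension 8 (self-attention: Q = K = V)
x = np.random.default_rng(0).normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```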
For a simple transformer model (Figure 4), a text input, such as a sentence or paragraph, must be tokenized (i.e., split into smaller units) for further processing (Figure 5).27,28,29 These tokens are then encoded numerically and transformed into embeddings (i.e., vector representations that maintain meaning). In addition, the order of the words in the input is positionally encoded. Using these embeddings of all tokens along with position information, the encoder within the transformer then generates a representation. The positionally encoded input representation and output embeddings are processed by the decoder so that output can be generated based on these clues (e.g., an initial input or a new word that was previously generated). During training, the decoder learns how to predict the next word based on the previous words. To accomplish this, the output sequence is shifted to the right by one position; thus, the decoder can only utilize the preceding words. After the decoder generates the output embeddings, a linear layer maps them to the dimensionality of the vocabulary, producing a score (logit) for every possible token. The softmax function then converts these scores into a probability distribution, from which the next output token is drawn. This procedure is known as autoregressive generation and is repeated to produce the entire output. Notably, LLMs are not deterministic but stochastic, meaning they can generate different answers for the same query.30 This is because the model returns a probability distribution over all possible tokens and samples from this distribution to produce the output token.
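As a toy illustration of this final step (the vocabulary and logit values below are invented for demonstration), sampling from the softmax distribution, rather than always selecting the highest-scoring token, is precisely what makes the output stochastic:

```python
import numpy as np

rng = np.random.default_rng()
vocab = ["pneumonia", "effusion", "nodule", "atelectasis"]   # toy vocabulary
logits = np.array([2.0, 1.0, 0.5, 0.1])   # toy scores from the linear layer

probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probabilities
# Sampling from this distribution (rather than always taking the argmax)
# is why the same prompt can produce different outputs on different runs.
for _ in range(3):
    print(rng.choice(vocab, p=probs))
```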
The masked multi-head attention layer is a crucial component that distinguishes the transformer model from the simple encoder–decoder architecture described above.8 The attention layer contains the weights learned during training that represent the strength of the relationship between all token pairs in the input sentence. This mechanism guarantees that each token has a direct connection to all tokens that came before it. This is a great achievement considering the gradient issues of older architectures such as recurrent neural networks and LSTM networks, specifically the difficulties in recalling previous tokens when two tokens are far apart.31,32 The attention layer is masked, such that the model can only focus on previous tokens or positions in the input sequence. This restriction ensures that the model cannot access information about future tokens, which could result in data leakage or violate the causality of the sequence (i.e., the effects of one part of a sequence on another). The attention layer is termed multi-head because it comprises multiple attention layers operating in parallel, each of which can capture different aspects of the relationships between tokens.
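Continuing the earlier attention sketch, masking can be illustrated by assigning future positions a score of negative infinity before the softmax, so that they receive exactly zero attention weight (again a toy example, not production code):

```python
import numpy as np

n = 5                                            # sequence length
mask = np.tril(np.ones((n, n), dtype=bool))      # True where look-back is allowed
scores = np.random.default_rng(1).normal(size=(n, n))
scores = np.where(mask, scores, -np.inf)         # block attention to future tokens

# After the softmax, masked positions receive zero weight, so each token
# attends only to itself and to earlier tokens.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
print(np.round(weights, 2))                      # lower-triangular weight matrix
```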
It is important to note that LLMs can use external tools (e.g., calculators, image readers, search engines) to perform tasks that are not best expressed in the form of text (e.g., numerical computation) or to overcome the limitation of being trained on old data that prevents them from capturing current or external information.33 Furthermore, LLMs can also be used within external tools or applications (e.g., LangChain), which can significantly expand the capabilities of LLMs.
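The exact interfaces vary by framework; purely as a schematic sketch (this is not LangChain's actual API), the host application might dispatch tool calls requested by the model as follows:

```python
def calculator(expression: str) -> str:
    # Toy arithmetic evaluator; a real deployment would use a safe parser.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def handle_model_output(output: dict) -> str:
    """If the model requested a tool, run it and return the result to be
    fed back into the conversation; otherwise pass the text through."""
    if output.get("tool") in TOOLS:
        return TOOLS[output["tool"]](output["input"])
    return output.get("text", "")

# e.g., the model decides text alone cannot answer "What is 17 * 23?"
print(handle_model_output({"tool": "calculator", "input": "17 * 23"}))   # 391
```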
ChatGPT revolution and basics
At the time of writing, the latest text generation tools released by OpenAI are GPT-3.5, GPT-4, and ChatGPT. All these tools are based on the transformer architecture, as the acronym, GPT, indicates. Considering all previous efforts in LLMs, ChatGPT and GPT-4 are two notable accomplishments that have significantly raised the bar for the capabilities of existing AI systems.34 The GPT-3.5 model is a fine-tuned version of the GPT-3 model and was trained as a completion-style model, meaning it can generate relevant words that follow the input words. GPT-4, on the other hand, is an entirely new large multimodal model that has also been adjusted with reinforcement learning from human feedback (RLHF) to better align with human expectations.34 Extending text input to multimodal signals is regarded as a significant development. Overall, GPT-4 is superior to GPT-3.5 in its ability to solve complex tasks, as evidenced by a significant performance increase on various evaluation tasks.35 ChatGPT, based on GPT-3.5 and GPT-4, was optimized for creating conversational responses (i.e., as a conversation-style model) and further fine-tuned using RLHF,36 allowing it to provide human-like responses to user queries or questions. In RLHF, humans rank candidate outputs, and a reward model trained on these rankings is used to steer the model toward outputs that align with human expectations; this alignment might be critical to ChatGPT's success, which has sparked the interest of the AI community ever since its debut because of its exceptional potential for human communication. The implementation of ChatGPT in conversational-style interactions opens up a universe of opportunities for human–computer interaction. Its capacity to comprehend context, create logical responses, and maintain conversational flow makes it a viable tool for a vast array of domains and use cases, such as customer support, brainstorming, content generation, and tutoring. Furthermore, ChatGPT now supports a plugin mechanism, which expands its compatibility with existing tools and applications.33
Despite the tremendous progress, there remain limitations with these superior LLMs, such as producing “hallucinations” (i.e., fabrication of facts), factual errors, potentially risky responses in certain contexts, variable source reporting, or changing behaviors or drifts.34,37,38,39,40,41 Due to these limitations, they should be used cautiously. The risks related to LLM use are extensively discussed later in this review.
These models can also be used in coding environments and as part of other applications via application programming interfaces (APIs). Currently, the main issues are token limits and the high usage fees for ChatGPT and the various GPT APIs.
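For illustration, a minimal call through the OpenAI Python client as it existed at the time of writing (model names, pricing, and interfaces change frequently, and the prompt here is our own example):

```python
import openai   # OpenAI Python client as available at the time of writing

openai.api_key = "YOUR_API_KEY"   # placeholder; keep real keys out of code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",        # model name current at the time of writing
    messages=[
        {"role": "system", "content": "You are a helpful radiology assistant."},
        {"role": "user", "content": "Explain what BI-RADS 3 means in one sentence."},
    ],
)
print(response.choices[0].message.content)
```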
Prompt engineering
In the context of LLMs, a prompt is an input provided to the model to steer its output. These prompts are often sequences constructed from natural language but can also be other types of structured information. The prompt’s syntax (e.g., structure, length, ordering) and semantic contents (e.g., words, tone) have a significant impact on the outputs of LLMs.42 This poses a challenge, as even slight modifications can lead to substantially different results (“prompt brittleness”).43
Prompt engineering is an emerging field of research that attempts to design prompts that steer LLMs toward a desired output. In contrast to other methods (e.g., pre-training, fine-tuning), this way of influencing the outputs does not involve updating the weights of LLMs, thus leaving the underlying model unchanged. The currently limited theoretical understanding of why some prompts work better than others makes it challenging to design effective prompts in a principled way. Therefore, “prompt engineers” often have to resort to extensive empirical experimentation for specific use cases.
A multitude of prompting techniques have been developed (Table 2).43,44 The most basic prompts provide a task text that should be followed by an answer, without giving more context or examples (i.e., zero-shot prompting). In-context learning (often realized as few-shot prompting) refers to providing examples of desired input–output pairs in the input prompt (e.g., questions and corresponding answers from the training data) together with a new question that the LLM should respond to following the provided examples. Instruction following requires an LLM that was fine-tuned in a supervised way to follow instructions (e.g., ChatGPT). These types of LLMs can be provided with instructions and one or more examples (similar to in-context learning). Chain-of-thought prompting refers to a strategy of breaking down a task into smaller logical subtasks, which can empirically improve the performance of LLMs.25 One simple way to steer the LLM in this direction is to provide the instruction “let’s think step-by-step.”19 The Tree-of-Thoughts framework is an example of multi-turn prompting that extends this approach by considering multiple reasoning possibilities at each step.45
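The differences between these techniques are easiest to see side by side; the radiology-flavored prompt texts below are invented examples:

```python
# Zero-shot: the task alone, with no context or examples.
zero_shot = (
    "Classify the following report sentence as normal or abnormal:\n"
    "'No focal consolidation is seen.'"
)

# Few-shot / in-context learning: input-output examples precede the new case.
few_shot = (
    "Sentence: 'There is a 4 mm nodule in the right upper lobe.' -> abnormal\n"
    "Sentence: 'The heart size is within normal limits.' -> normal\n"
    "Sentence: 'No focal consolidation is seen.' ->"
)

# Chain-of-thought: ask the model to reason through subtasks before answering.
chain_of_thought = (
    "Is a 9 mm solid pulmonary nodule larger than 6 mm? Let's think step-by-step."
)
```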
Prompt engineering could also play a valuable role in radiology-specific tasks such as report structuring, summarization, or language translation (Table 3). Nevertheless, its true value requires further exploration. Initial results suggest that for tasks such as report summarization, domain adaptation through lightweight fine-tuning may outperform various in-context prompting approaches.46 A promising research direction involves enriching initial prompts with information retrieved from external sources (e.g., through API calls to other models, tools, and databases) to augment the capabilities of LLMs and increase the correctness of their outputs.33,47,48
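A schematic sketch of this retrieval-augmented pattern (the `search_guidelines` function is hypothetical and stubbed for illustration):

```python
def search_guidelines(query: str) -> str:
    """Hypothetical retrieval function (e.g., an API call to a guideline
    database or search engine); stubbed here for illustration."""
    return "Guideline excerpt relevant to: " + query

question = "What follow-up is recommended for an incidental 5 mm solid nodule?"
context = search_guidelines(question)   # retrieved external information

# The retrieved text is prepended so the model answers from it, not memory.
prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {context}\n"
    f"Question: {question}"
)
```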
Potential applications in medicine
The application of LLMs is expected to transform medical practice in all fields and in numerous ways. First, LLMs may potentially assist students during their medical training, by providing nonobvious and logical insights into explanations and role-modeling a deductive reasoning process.49 Second, LLMs can rapidly develop specialized knowledge for different medical disciplines and generate answers to clinical questions by analyzing large amounts of medical data, and with the possibility of fine-tuning the generated content based on the most recent published papers, the domain-specific medical literature, and on the reader’s background.50 In all medical fields, this capability of LLMs could finally translate into enhanced clinical decision support, improved patient engagement, and accelerated medical research.51,52,53
Regarding enhanced clinical decision support, LLMs are expected to improve diagnostic accuracy and the prediction of disease progression and to support clinical decision-making.54 As practical examples, the use of PubMedBERT (a pre-trained model based on PubMed abstracts and full-text articles) and ClinicalBERT (a contextual language model trained on PubMed Central abstracts and full-text articles and fine-tuned on notes from the Medical Information Mart for Intensive Care) has yielded two successful applications: the automatic determination of the presence and severity of esophagitis, based on the Common Terminology Criteria for Adverse Events guidelines, from the notes of patients treated with thoracic radiotherapy,55 and the accurate prediction of short-, mid-, and long-term mortality of patients admitted to intensive care units using only the clinical notes from the first 24 hours of admission.56,57
With regard to benefits for the patient, LLMs proved to be helpful in providing correct answers to basic questions posed by patients with prostate cancer, rhinologic diseases, and cirrhosis,51,52,53 and in providing emotional support to patients and caregivers, encouraging proactive steps to manage the diagnosis and treatment strategies.53
Furthermore, LLMs may accelerate medical research by allowing for the identification of high-quality papers within all medical literature, the detection of correlations, and the provision of insights that may aid researchers in accelerating medical advancement.58,59
Moreover, the adoption of LLMs may aid or simplify certain daily tasks, such as text generation, text summarization, and text correction, which can lead to significant time savings and improvements in grammar, readability, and conciseness of written content while maintaining the overall message and context. As an example of their potential in clinical practice, LLMs could output a formal discharge summary in a matter of seconds by analyzing all clinical notes.60
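As a hedged illustration of such a workflow (the notes and prompt wording are invented, and any real deployment would require de-identification and clinician sign-off):

```python
clinical_notes = (
    "Day 1: admitted with community-acquired pneumonia; antibiotics started. "
    "Day 3: afebrile; supplemental oxygen weaned. Day 4: discharged home."
)   # toy, de-identified stand-in for real clinical notes

prompt = (
    "You are drafting a discharge summary for review by the treating physician. "
    "Summarize the hospital course, diagnoses, treatments, and follow-up plan "
    "from the notes below, and flag anything that is unclear.\n\n" + clinical_notes
)
# The LLM's draft would then be verified and edited by a clinician
# before entering the medical record.
```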
Potential applications in radiology
Overall, LLMs have shown promise in several fields, including radiology. They have proven suitable for a variety of tasks, some of which have already been explored in earlier studies. For example, it has been demonstrated that these models may have a role in patient triage and workflow optimization. Specifically, they can help in the automated determination of the imaging study and protocol based on radiology request forms.61 In this context, LLMs could be integrated into radiology departments’ information technology systems to facilitate patient triage; they could help prioritize imaging studies based on urgency, patient information, and existing imaging data. This could, in turn, streamline the workflow and ensure that critical cases receive prompt attention.
Furthermore, the performance of LLMs in generating impressions from radiology reports has been evaluated. A recent study showed promising results, suggesting the feasibility of LLM use in report generation and summarization, considering coherence, comprehensiveness, factual consistency, and harmfulness.62 Another possible use case for LLMs in radiology is their assistance in diagnosis. Indeed, by analyzing the imaging data and considering the patient’s medical history, these models can suggest potential diagnoses, differential diagnoses, and possible treatment options.63 In view of this, LLMs could be utilized as AI-powered assistants for radiologists, helping them interpret medical images and providing preliminary assessments.
Moreover, they have proven valuable in answering radiology-related questions, including explanations of specific imaging findings, clarifications regarding radiological procedures, and general information about different types of imaging modalities.64,65 Radiologists, trainees, and even patients could interact with these models to obtain answers to questions related to radiology. This aspect is closely linked to the use of LLMs in the context of education and training, as a virtual tutor for radiology residents to understand complex concepts, interpret images, and provide learning resources, fostering self-directed learning and knowledge retention. As evidence of this, it is worth mentioning that, despite no radiology-specific pre-training, ChatGPT almost passed a radiology board-style examination, even when image-based questions were excluded.66
In fact, LLMs can be integrated with existing radiology software and systems to assist radiologists in various ways. For example, they can serve as a natural language interface to several radiology tools currently in use.
Radiologists can interact with the system using plain language queries, making it easier to retrieve patient data, reports, and images. When asked, for example, to “show me all the MRI reports from last week,” the LLM can retrieve and display the relevant information. Furthermore, the LLM can suggest structured report templates and help ensure that the report includes all necessary information. Finally, LLMs can be integrated with image analysis tools to provide radiologists with assistance in image interpretation and in data extraction and structuring. LLMs can be customized to fit the specific needs of radiology departments and integrated seamlessly with existing PACS and radiology information systems (RISs). If properly integrated with the electronic health records (EHRs) and RIS, LLMs could automatically identify the radiology reports with recommendations for additional imaging and help ensure the timely performance of clinically necessary follow-ups.
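One way such an interface could work is for the LLM to translate the free-text request into the structured query that the RIS or PACS actually executes; the JSON field names below are hypothetical:

```python
# The LLM translates free text into a structured query for the RIS/PACS;
# the field names here are hypothetical, for illustration only.
instruction = (
    "Convert the user's request into a JSON query with the fields "
    "'modality', 'date_from', and 'date_to'.\n"
    "Request: show me all the MRI reports from last week"
)
# Expected model output (illustrative dates), to be validated before execution:
# {"modality": "MR", "date_from": "2023-09-18", "date_to": "2023-09-24"}
```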
It is important to note that although LLMs can be a valuable tool in radiology, they should complement the expertise of radiologists rather than replace it. One notable issue with ChatGPT is its tendency to maintain unwavering confidence in its responses, even when providing incorrect answers. This characteristic could have adverse consequences in clinical situations.67 Ethical considerations, validation, and regulatory compliance are essential aspects to be addressed before deploying AI systems in real-world medical settings. In addition, continuous updating and improvement of the model would be necessary to maintain accuracy and relevance.
Advantages, disadvantages, and risks
There are both advantages and disadvantages of LLMs that are inherent to their structure and capabilities, and certain aspects are applicable to all LLMs, irrespective of their architecture or application. The most important of these advantages is that they possess advanced NLP capabilities. The advanced language comprehension of LLMs allows them to perform tasks such as text summarization, text translation, and question answering in a manner similar to humans.68 Text generated by an LLM is usually free of grammatical mistakes and misspellings, which is important in radiology practice. These NLP capabilities can be applied to radiological reports to convert them into structured text, translate them into other languages, and explain them in a way that is comprehensible to patients.69 Another important advantage is that their generative capacity can be used to generate code for medical imaging research. Furthermore, LLMs can be used by people with limited to no coding experience, translating research ideas into useful code.70 This code can be used to develop machine learning models for medical imaging research. Combining the NLP capabilities of LLMs with their generative capacity also allows code debugging and application troubleshooting, enhancing research possibilities in medical image analysis. For image analysis in particular, LLMs can be successfully coupled with convolutional neural networks (CNNs) to enable image recognition and the generation of relevant text based on images: CNNs extract image features, which LLMs can subsequently use for image recognition and relevant text generation.71
Nonetheless, despite the important advantages of LLMs, their use still has significant disadvantages. The most important disadvantage of LLM use in radiological research is related to privacy concerns. Privacy issues can emerge because sensitive patient information can be compromised when uploaded to LLMs.72 This important disadvantage can raise ethical concerns when utilizing patient data that includes radiological reports and images. Appropriate data de-identification processes need to be in place to ensure the safe use of patient data in LLMs.
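As a purely illustrative sketch of such a step (pattern-based scrubbing only; real-world de-identification requires validated tools and institutional oversight), scrubbing might precede any LLM call:

```python
import re

def scrub(text: str) -> str:
    """Illustrative pattern-based scrubbing only; real-world de-identification
    requires validated tools and institutional oversight."""
    text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", text)            # dates
    text = re.sub(r"\bMRN[:\s]*\d+\b", "[MRN]", text)                  # record numbers
    text = re.sub(r"\b[A-Z][a-z]+, [A-Z][a-z]+\b", "[NAME]", text)     # Last, First
    return text

print(scrub("Smith, John (MRN: 1234567) was scanned on 01/02/2023."))
# -> [NAME] ([MRN]) was scanned on [DATE].
```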
Another disadvantage of LLMs is the possibility of generating information that is artificial and potentially harmful based on their logic (i.e., hallucinations), or the irrelevant repetition of information existing in their training data (i.e., “stochastic parrots”).37 When used to translate reports or to generate information that will be distributed to patients or used to assist diagnostic decisions, the user needs to be extremely careful to avoid cases where LLMs generate fake information. Such fake information can vary from an inaccurate translation of a radiological report to reaching false conclusions related to a disease or a diagnosis. This necessitates the validation of LLM-generated content, especially when used in patient care, a fact that should also be disclosed to patients when receiving such information.37
Given that LLMs can generate fake information, the interpretability and transparency of the models are extremely important. The ability to explain why the model has produced a certain output and to identify activated neurons and their weights (interpretability), and to decipher how the model works, how it is structured, and what capabilities and limitations it has (transparency), is of utmost importance when LLMs are used for medical decision-making, as errors can have an impact on patient care. Companies such as OpenAI have attempted to produce tools that enable the interpretability of their models, e.g., GPT-4.73 This can increase users’ trust in the model output and allow debugging and error identification to ensure that critical errors related to patient management are not repeated.74
The quality of an LLM’s output is directly influenced by the information used for its training. To ensure accuracy in LLM responses, the quality and diversity of the training data need to be considered. Therefore, generic LLMs (including GPT-4 and Bard) that have not been trained specifically on medical data may yield inaccurate responses to medically related tasks. On the other hand, medically oriented LLMs such as BioBERT and Med-PaLM2 have been trained on medical data, but how certain information is represented within these models remains unknown.75 Moreover, LLMs are constrained by the temporal cutoff of their training data. For instance, at the time of writing this review, ChatGPT had been trained with data up to September 2021, meaning it can be less reliable when up-to-date medical information is required.69,76 With the rapidly evolving knowledge in medicine, this can represent a relevant risk for users and patient care, as the LLM may not have access to the latest data and recently published guidelines.77
LLMs can be freely used by patients for self-diagnosis or to decode radiological reports. Although LLMs can simplify radiological reports with technical language to a more understandable summary for the patient, there is a risk of overconfidence, with patients not being aware of output errors and assuming that the provided answers are always correct.78,79 The risk of missing relevant information in a simplified summary should also be considered in patient care.78 The generation of different outputs from the same query can pose a risk of contradictory answers with the difficulty of selecting the correct medical information.69
Furthermore, LLMs are not capable of providing ethical insights and evaluating the ethical risks related to the use of the information. The generation of incorrect diagnoses, misinterpretation of the results, or wrong recommendations can induce the risk of medico-legal implications with dangerous information for patient management, which requires specific regulation in the near future.80
Last but not least, an important disadvantage of LLMs is the environmental and financial risk of their use. Given that the energy needed to train an LLM can be comparable with that of a trans-Atlantic flight, with energy costs reaching thousands of US dollars,38 the widespread use and training of such models require regulation.
Ethical and regulatory considerations
The recent improvements in LLM performance have also affected the potential use of this technology in healthcare and radiology in particular, with studies proposing novel applications or aimed at demonstrating its medical prowess.66,81,82 However, the actual use of LLMs in medical imaging remains controversial due to unresolved ethical and regulatory questions, partly due to inherent technical limitations.83
As with other machine learning models, especially deep learning models, LLMs are highly sensitive to bias embedded within their training data. Although some sources of bias, such as age or gender distribution, can be easily identified and even addressed, others, such as differences due to the sourcing of the training data, can be less apparent or solvable. For example, most text data used to train LLMs originate from Western countries and are written in the English language, simply due to the realities regarding the availability of the materials and technology necessary to produce and collect sufficiently large datasets.84 Beyond the reduced representation of other areas of the world, even within the countries from which these data mainly originate, a lack of fair representation of data produced by all societal components can be expected. Moreover, this imbalance cuts both ways: the voice of the majority may drown out smaller communities, but extremely vocal minorities may also end up being overrepresented within the training data. While efforts to address these issues are ongoing, physicians should be aware that human bias is an integral component of any LLM and should be accounted for rather than ignored.85 On the other hand, as more and more of the data available online are produced by software, from simple automated bots to LLMs themselves, this trend represents a novel source of bias, with the risk of harming the training of future models by further diluting the quality of available data and reducing the models’ ability to meet human needs and expectations.86
Regulatory bodies are attempting to address these ethical issues, as well as other limitations of LLMs, such as hallucinations. Both in the United States (US) and the European Union (EU), the use of LLMs in healthcare would typically fall under the domain of current medical device regulations.87 Even if this prevents their marketing as medical devices, the reality is that LLMs are not currently prevented from answering health-related questions, and the risk of misinformation and even potential harm to patients is not absent. For the EU in particular, it should be noted that medical devices not only require preliminary certification but also continuous surveillance, which poses specific challenges to complex and somewhat unpredictable models such as LLMs.88
Future directions
As discussed in the previous sections, there is a huge variety of opportunities to apply LLMs in radiology, and new research is being published every day (Figure 1). All these results already indicate that every aspect of radiology practice will eventually be affected by these new tools.
However, given the lack of regulations and the unresolved ethical uncertainties, whether these tools can be rapidly implemented in radiology remains unclear. Regulations should be in place to mitigate the potential risks that may be associated with this new technology, and these tools will likely be regulated in the EU and the US in the same way as other clinical decision support tools.87 Nevertheless, the ethical issues and their solutions could be use-case specific, which may require ongoing human oversight and is not foreseeable with the current examples.9
Some LLMs, such as GPT-4, have shown remarkable potential in various fields and have even signaled that they may carry sparks of artificial general intelligence.35 Looking ahead, we can see some important trends that are likely to shape the future of LLM applications in radiology.
Currently, radiologists must log into EHRs separately to obtain more information about the medical history or laboratory results of patients because the EHR is a separate system from the PACS. Considering that most imaging orders are laconic and do not include a good summary of the medical history, the radiologist usually must switch back and forth between systems, which can be extremely time-consuming. This process could be assisted or completely taken over by LLMs, whereby a summary of the patient’s history and findings would be presented automatically.89
Another application of LLMs is that they could serve as sophisticated clinical decision support systems in which they are fine-tuned with guidelines and recommendations, such as those of the Fleischner Society, and automatically generate evidence-based recommendations from radiology reports, such as follow-up recommendations for solid pulmonary nodules.65
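A sketch of how such a system might constrain the model to the supplied guideline text (the excerpt is left as a placeholder rather than quoting any guideline, and the finding is invented):

```python
guideline_excerpt = "<relevant Fleischner Society recommendation inserted here>"   # placeholder
finding = "6 mm solid nodule in the left lower lobe."   # invented example finding

# Grounding the model in the supplied guideline text reduces reliance on
# whatever (possibly outdated) guidance is embedded in its training data.
prompt = (
    "Using only the guideline excerpt below, state the recommended follow-up "
    "for the finding, or reply 'not covered' if the excerpt does not apply.\n"
    f"Guideline: {guideline_excerpt}\n"
    f"Finding: {finding}"
)
```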
Furthermore, LLMs can also play a critical role in training the next generation of radiologists.90 Currently, training can be hindered by heavy workload. Through integration into PACS, LLMs could provide a personalized, interactive, and effective learning environment, provide similar examples from the archives to the one the trainee is working on, recommend additional resources for diagnosis, or fully simulate a real clinical scenario to prepare trainees for night shifts.
Although the potential of LLMs in radiology is evident, there are various limitations and problems that must be addressed as research in this area progresses. One of the most serious issues in using LLMs in medicine is data privacy.83,91 To address this issue, continuing research is focusing on building robust approaches for privacy-preserving machine learning.92,93,94,95 The robustness of LLMs, particularly in the clinical setting, is a further concern of the utmost importance. These models must perform consistently and reliably across a broad spectrum of demographics, equipment, and scenarios. Ongoing research focuses on enhancing model generalization and minimizing biases to address this issue.96,97
Concluding remarks
Overall, LLMs have the potential to transform the field of radiology, not only in the clinical setting but also in the academic setting. Consequently, radiologists should be familiar with the inner workings and idiosyncrasies of LLMs, such as hallucinations, drifts, and their stochastic nature, as described in this review. Nonetheless, the future of LLMs in radiology appears to be very bright and has the potential to revolutionize patient care, improve outcomes, and enhance radiologists’ capabilities. However, these developments should be accompanied by regulations and ethical guidelines to ensure that these tools are used safely and responsibly without compromising patient privacy or data security. The authors hope the overview of the key concepts provided in this article will help improve the understanding of LLMs among the radiology community.