Generative AI use cases in healthcare
- Facilitating medical training and simulations
- Assisting in clinical diagnosis
- Contributing to drug development
- Automating administrative tasks
- Generating synthetic medical data
Check out our blog for more Gen AI use cases for business.
Facilitating medical training and simulations
Generative AI in healthcare can create realistic simulations that replicate a wide variety of health conditions, allowing medical students and professionals to practice in a risk-free, controlled environment. AI can generate patient models with different diseases or simulate surgeries and other medical procedures.
Traditional training relies on pre-programmed scenarios, which are restrictive. Generative AI, by contrast, can quickly generate new patient cases and adapt them in real time in response to the decisions trainees make, creating a more challenging and authentic learning experience.
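To make this more concrete, below is a minimal sketch of such an adaptive simulation loop, assuming access to OpenAI's Python SDK; the model name, prompts, and starting case are illustrative placeholders rather than a production training tool.

```python
# Minimal sketch of an adaptive patient simulation driven by an LLM.
# Assumptions: the `openai` Python SDK is installed and OPENAI_API_KEY is set;
# the model name, prompts, and starting case are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a simulated patient case generator for medical training. "
    "Maintain a consistent clinical picture and update vitals, symptoms, and "
    "labs in response to the trainee's actions. Never reveal the diagnosis."
)

# Hypothetical starting scenario; in practice this could itself be generated.
case_state = "58-year-old male, fever 39.2C, HR 118, BP 92/60, confusion."

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": f"Initial presentation: {case_state}"},
]

while True:
    action = input("Trainee action (or 'quit'): ")
    if action.lower() == "quit":
        break
    messages.append({"role": "user", "content": f"Trainee does: {action}"})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=messages,
        temperature=0.7,
    )
    update = response.choices[0].message.content
    messages.append({"role": "assistant", "content": update})
    print("\nPatient update:\n", update, "\n")
```

In a real training tool, the loop would also log trainee decisions and score them against a learning rubric, but the basic pattern of feeding each decision back to the model stays the same.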
Real-life examples
The University of Michigan built a generative AI in healthcare model that can produce various scenarios for simulating sepsis treatment.
The University of Pennsylvania deployed a generative AI model to simulate the spread of COVID-19 and test different interventions. This helped the researchers evaluate the potential impact of social distancing and vaccination on the spread of the virus.
Assisting in clinical diagnosis
Here is how generative AI for healthcare can contribute to diagnostics:
- Generating high-quality medical images. Hospitals can employ generative AI tools to enhance traditional AI’s diagnostic abilities. The technology can convert poor-quality scans into detailed, high-resolution medical images, apply anomaly detection algorithms, and present the results to radiologists.
- Diagnosing diseases. Researchers can train generative AI models on medical images, lab tests, and other patient data to detect and diagnose health conditions at an early stage. These algorithms can spot skin cancer, lung cancer, hidden fractures, early signs of Alzheimer’s, diabetic retinopathy, and more. Additionally, AI models can uncover biomarkers associated with particular disorders and predict disease progression.
- Answering medical questions. Instead of searching through medical literature, diagnosticians can put their questions to generative AI. The algorithms can process large amounts of data and generate answers fast, saving doctors precious time.
Real-life examples
A team of researchers experimented with generative adversarial network (GAN) models to extract and enhance features in low-quality medical scans, transforming them into high-resolution images. The approach was tested on brain MRI scans, dermoscopy, retinal fundoscopy, and cardiac ultrasound images, showing improved anomaly detection accuracy after image enhancement.
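As a rough illustration of the super-resolution idea behind such GAN pipelines, the PyTorch snippet below defines a small upscaling generator network. The architecture and sizes are simplified assumptions, not the researchers' actual model.

```python
# Simplified sketch of a GAN-style super-resolution generator for medical scans.
# Assumption: 1-channel (grayscale) images upscaled 4x; the architecture is illustrative.
import torch
import torch.nn as nn

class SRGenerator(nn.Module):
    def __init__(self, channels: int = 1, features: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=9, padding=4),
            nn.PReLU(),
        )
        self.body = nn.Sequential(
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.BatchNorm2d(features),
            nn.PReLU(),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.BatchNorm2d(features),
        )
        # Two PixelShuffle stages, each doubling spatial resolution (4x total).
        self.upsample = nn.Sequential(
            nn.Conv2d(features, features * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.PReLU(),
            nn.Conv2d(features, features * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.PReLU(),
        )
        self.tail = nn.Conv2d(features, channels, kernel_size=9, padding=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.head(x)
        h = h + self.body(h)  # residual connection around the body
        return self.tail(self.upsample(h))

# Dummy low-resolution "scan": a batch of one 64x64 grayscale image.
low_res = torch.randn(1, 1, 64, 64)
high_res = SRGenerator()(low_res)
print(high_res.shape)  # torch.Size([1, 1, 256, 256])
```

A full pipeline would train such a generator adversarially against a discriminator on paired low- and high-resolution scans and validate the outputs with radiologists before any clinical use.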
In another example, Google’s AI-powered Med-PaLM 2 was trained on the MedQA dataset and achieved an 85% accuracy rate when answering relevant medical questions. Google admits that the algorithm still needs improvement, but it’s a strong start for generative AI as a diagnostics assistant.
Contributing to drug development
According to the Congressional Budget Office, developing a new drug costs $1 billion to $2 billion on average, a figure that also covers the cost of failed candidates. Fortunately, there is evidence that AI has the potential to cut the time needed to design and screen new drugs almost in half, saving the pharma industry around $26 billion in annual expenses. Additionally, the technology could reduce costs associated with clinical trials by $28 billion per year.
Pharmaceutical companies can deploy generative AI in healthcare to speed up drug discovery by:
- Designing and generating new molecules with desired properties that researchers can later evaluate in lab settings
- Predicting properties of novel drug candidates and proteins
- Generating virtual compounds with high binding affinity to the target, which can be screened in computer simulations to reduce costs (see the sketch after this list)
- Forecasting side effects of novel drugs by analyzing their molecular structure
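To make the screening step more concrete, here is a minimal sketch that filters hypothetical model-generated molecules (written as SMILES strings) by basic drug-likeness criteria. It assumes the open-source RDKit library is installed; the candidate list and thresholds are illustrative.

```python
# Minimal sketch: screen candidate molecules from a generative model using
# simple drug-likeness criteria. Assumes RDKit is installed (pip install rdkit).
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Hypothetical SMILES strings that a generative model might have proposed.
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",              # aspirin, used here as a sanity check
    "CCN(CC)CCCC(C)Nc1ccnc2cc(Cl)ccc12",  # chloroquine-like scaffold
    "C1=CC=CC=C1",                        # benzene: too small to be drug-like
]

def passes_basic_filters(smiles: str) -> bool:
    """Return True if the molecule parses and meets illustrative thresholds."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:                       # invalid structure produced by the model
        return False
    mol_weight = Descriptors.MolWt(mol)
    log_p = Descriptors.MolLogP(mol)
    drug_likeness = QED.qed(mol)          # quantitative estimate of drug-likeness (0..1)
    return 150 <= mol_weight <= 500 and log_p <= 5 and drug_likeness >= 0.5

shortlist = [s for s in candidates if passes_basic_filters(s)]
print("Candidates worth further evaluation:", shortlist)
```

In practice, compounds that pass such cheap filters would move on to more expensive binding-affinity simulations and, eventually, lab evaluation.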
You can find more information on the role of AI in drug discovery and how it facilitates clinical trials on our blog.
Real-life examples
The rise of strategic partnerships between biotech companies and AI startups is an early sign of generative AI gaining serious traction in the pharmaceutical industry.
Just recently, Recursion Pharmaceuticals acquired two Canadian AI startups for $88 million. One of them, Valence, is known for its generative AI capabilities and will work on designing drug candidates based on small and noisy datasets that are not sufficient for traditional drug discovery methods.
Another interesting example comes from the University of Toronto. A research team built a generative AI system, ProteinSGM, that can generate novel, realistic proteins after learning from image-based representations of existing protein structures. The tool can produce proteins at a high rate, and another AI model, OmegaFold, is then used to evaluate the resulting proteins’ potential. The researchers reported that most of the newly generated sequences fold into real protein structures.
Automating administrative tasks
This is one of the most prominent generative AI use cases in healthcare. Studies show that the burnout rate among US physicians has reached a whopping 62%. Doctors suffering from burnout are more likely to be involved in incidents that endanger their patients and are more prone to alcohol abuse and suicidal thoughts.
Fortunately, generative AI in healthcare can take part of this burden off doctors’ shoulders by streamlining administrative tasks. At the same time, it can reduce administrative costs, which, according to Health Affairs, account for 15%-30% of overall healthcare spending. Here is what generative AI can do:
- Extract data from patients’ medical records and populate the corresponding health registries. Microsoft is planning to integrate generative AI into Epic’s EHR, where it will handle various administrative tasks, such as replying to patient messages.
- Transcribe and summarize patient consultations, fill this information into the corresponding EHR fields, and produce clinical documentation. Microsoft-owned Nuance has integrated GPT-4 into its clinical transcription software, and doctors can already test the beta version.
- Generate structured health reports by analyzing patient information, such as medical history, lab results, and scans
- Produce treatment recommendations
- Answer doctors’ queries
- Find optimal time slots for appointment scheduling based on patients’ needs and doctors’ availability (see the sketch after this list)
- Generate personalized appointment reminders and follow-up emails
- Review medical insurance claims and predict which ones are likely to be rejected
- Compose surveys to gather patient feedback on procedures and visits, analyze the responses, and produce actionable insights to improve care delivery
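As an illustration of the scheduling point above, the snippet below is a simple sketch that intersects a doctor's free slots with a patient's preferred windows. Real systems would pull both from the EHR or practice-management software; the data here is made up.

```python
# Simple sketch: find appointment slots that satisfy both a doctor's availability
# and a patient's preferred time windows. All data below is illustrative.
from datetime import datetime, timedelta

SLOT_LENGTH = timedelta(minutes=30)

# Hypothetical free windows pulled from the doctor's calendar.
doctor_windows = [
    (datetime(2024, 7, 1, 9, 0), datetime(2024, 7, 1, 12, 0)),
    (datetime(2024, 7, 1, 14, 0), datetime(2024, 7, 1, 16, 0)),
]

# Hypothetical windows the patient said they prefer.
patient_windows = [
    (datetime(2024, 7, 1, 11, 0), datetime(2024, 7, 1, 15, 0)),
]

def candidate_slots(doctor, patient, slot=SLOT_LENGTH):
    """Yield slot start times that fit entirely inside both parties' windows."""
    for d_start, d_end in doctor:
        for p_start, p_end in patient:
            start = max(d_start, p_start)
            end = min(d_end, p_end)
            while start + slot <= end:
                yield start
                start += slot

for slot_start in candidate_slots(doctor_windows, patient_windows):
    print(slot_start.strftime("%Y-%m-%d %H:%M"))
# Prints 11:00, 11:30, 14:00, and 14:30 for the sample data above.
```

A generative model could then rank these candidate slots against free-text patient preferences and draft the personalized reminders and follow-up emails mentioned above.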
Real-life example
Navina, a medical AI startup, built a generative AI assistant that helps doctors tackle administrative duties more efficiently. This tool can access patient data, including EHRs, insurance claims, and scanned documents, give status updates, recommend care options, and answer doctors’ questions. It can even generate structured documents, such as referral letters and progress notes.
Navina has already secured $44 million in funding, which signals strong interest in tools of this kind.
Generating synthetic medical data
Medical research relies on access to vast amounts of data on different health conditions. Such data is often in short supply, especially for rare diseases; it is also expensive to gather, and its use and sharing are governed by privacy laws.
Generative AI in medicine can produce synthetic data samples that augment real-life health datasets and are not subject to privacy regulations, as synthetic records don’t belong to real individuals. Artificial intelligence can generate EHR entries, medical scans, and more.
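For readers curious about the mechanics, the snippet below is a bare-bones sketch of the GAN idea behind such tools, assuming PyTorch: a generator learns to turn random noise into records with the same number of columns as the real table, while a discriminator tries to tell real rows from synthetic ones. The toy data and network sizes are illustrative; production systems like the ones described in the examples below add considerably more machinery (mixed data types, privacy checks, and so on).

```python
# Bare-bones GAN sketch for synthetic tabular patient data (PyTorch).
# The "real" data here is random toy data standing in for normalized EHR features.
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM, BATCH = 8, 16, 64

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES), nn.Tanh(),       # outputs scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),                           # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_table = torch.rand(1000, N_FEATURES) * 2 - 1   # toy stand-in for real records

for step in range(200):
    # Train the discriminator on real vs. generated rows.
    real_rows = real_table[torch.randint(0, len(real_table), (BATCH,))]
    fake_rows = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = loss_fn(discriminator(real_rows), torch.ones(BATCH, 1)) + \
             loss_fn(discriminator(fake_rows), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    fake_rows = generator(torch.randn(BATCH, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_rows), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample a few synthetic "patient records" (still normalized feature vectors).
synthetic = generator(torch.randn(5, NOISE_DIM))
print(synthetic.shape)  # torch.Size([5, 8])
```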
Real-life examples
A team of German researchers built an AI-powered model, GANerAid, to generate synthetic patient data for clinical trials. This model is based on the GAN approach and can produce medical data with the desired properties even if the original training dataset was limited in size.
Another team of scientists experimented with generative AI to synthesize electronic health records. The researchers were motivated by restrictive data privacy regulations and the inability to effectively share patient data between hospitals. They built the EHR-M-GAN model, which can generate heterogeneous, mixed-type EHR data (containing both continuous and discrete values) that realistically represents patient trajectories.
Ethical considerations and challenges of generative AI in healthcare
Even though tech and consulting giants continue to invest in AI, prominent industry figures, including Tesla CEO Elon Musk and OpenAI CEO Sam Altman, warn of the risks associated with the technology. So, which challenges does generative AI bring to healthcare?
- Bias. An AI model’s performance is only as good as the dataset it was trained on. If the data does not fairly represent the target population, it leaves room for bias against underrepresented groups. As generative AI tools train on vast amounts of patient records, they will inherit any bias present there, and it will be a challenge to detect, let alone eradicate, it.
- Lack of regulations. Even though AI raises considerable ethical concerns, there are no official regulations yet governing the use of this technology. The US and the EU are working towards formalizing relevant policies, but this won’t happen in the near future.
- Accuracy concerns. AI does make mistakes, and in healthcare the price of such mistakes is high. For instance, large language models (LLMs) can hallucinate, meaning they can produce plausible-sounding output that is factually incorrect. Healthcare organizations will need to decide when to tolerate errors and when to require the AI model to explain its conclusions. For instance, if generative AI is used to assist in cancer diagnosis, doctors are unlikely to adopt a tool that can’t justify its recommendations.
- Accountability. Who is responsible for the final health outcome? Is it the doctor, the AI vendor, the AI developers, or yet another party? Lack of accountability can have a negative impact on motivation and performance.
Ready to enhance your healthcare practice with generative AI?
Generative AI algorithms are becoming increasingly powerful. Robert Pearl, a clinical professor at Stanford University School of Medicine, said:
“ChatGPT is doubling in power every six months to a year. In five years, it’s going to be 30 times more powerful than it is today. In 10 years, it will be 1,000 times more powerful. What exists today is like a toy. In next-generation tools, it’s estimated there will be a trillion parameters, which is interestingly the approximate number of connections in the human brain.”
AI can be a powerful ally, but if misused, it can cause significant damage. Healthcare organizations need to approach this technology with caution. If you are considering deploying AI-based solutions for healthcare, here are three tips to get you started:
- Prepare your data. Even if you opt for a pre-trained, ready-made AI model, you might still want to retrain it on your proprietary dataset, which needs to be of high quality and representative of the target population. Keep medical data secure at all times and safeguard patient privacy. It is also useful to disclose which dataset an algorithm was trained on, as this helps clarify where it will perform well and where it might fail.
- Take control of your AI models. Cultivate the concept of responsible AI in your organization. Make sure people know when and how to use the tools and who assumes responsibility for the final outcome. Test generative AI models on use cases with limited impact before scaling to more sensitive applications. As mentioned earlier, generative AI can make mistakes. Decide where a small failure rate is acceptable and where you can’t afford it. For instance, 98% accuracy can suffice in administrative applications, but it’s unacceptable in diagnostics and patient-facing practices. Devise a framework that will govern the use of generative AI at your hospital.
- Help your employees accept and use the technology. AI still needs human guidance, especially in the heavily regulated healthcare sector, and human-in-the-loop remains an essential ingredient for the technology to succeed. Medical and administrative staff will be expected to supervise AI models, so hospitals need to train people for this task. Employees, in turn, should rethink their daily routines now that AI is part of them and use the freed-up time to deliver more value.