Some tech experts don’t think it’s ready for prime time yet.
Generative AI, which can create and analyze text, images, audio, video, and more, is increasingly making its way into healthcare, pushed by both Big Tech firms and startups alike.
Google Cloud, Google's cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon's AWS division, meanwhile, says it is working with unnamed customers on ways to use generative AI to analyze medical databases for "social determinants of health." And Microsoft Azure is helping the not-for-profit healthcare network Providence build a generative AI system that automatically triages messages sent from patients to care providers.
Some of the best-known generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, which makes an ambient AI assistant for practitioners; and Abridge, which builds analytics tools for medical documentation.
The broad enthusiasm is reflected in the funding: generative AI healthcare startups have collectively raised tens of millions of dollars in venture capital to date, and a large majority of health investors say that generative AI has significantly influenced their investment strategies.
But both professionals and patients are divided over whether healthcare-focused generative AI is ready for prime time.
Generative AI might not be what people want
In a recent Deloitte survey, only about half (53%) of U.S. consumers said they thought generative AI could improve healthcare, for example by making care more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.
That skepticism may be warranted, according to Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the largest health system run by the U.S. Department of Veterans Affairs. Borkowski cautioned that deploying generative AI now could be premature, pointing to its "significant" limitations and open questions about its efficacy.
One of the main problems, he said, is that generative AI can't handle complex medical queries or emergencies. Because the models lack up-to-date clinical knowledge and human expertise, they aren't suited to giving comprehensive medical advice or treatment recommendations.
Several studies suggest those criticisms have merit.
A study in the journal JAMA Pediatrics found that ChatGPT, OpenAI's generative AI chatbot, which some healthcare organizations have piloted for limited use cases, made errors 83% of the time when diagnosing pediatric diseases. And when physicians at Boston's Beth Israel Deaconess Medical Center tested ChatGPT as a diagnostic assistant, the model ranked the wrong diagnosis as its top answer nearly two out of three times.
Today's generative AI also struggles with the medical administrative tasks that are part of clinicians' daily workflows. On the MedAlign benchmark, which evaluates how well generative AI performs tasks such as summarizing patient health records and searching across clinical notes, GPT-4 failed 35% of the time.
OpenAI and many other generative AI vendors warn that their models should not be relied on for medical advice. But Borkowski and others say the vendors could do more. "Relying solely on generative AI for healthcare could lead to wrong diagnoses, wrong treatments, or even life-threatening situations," Borkowski said.
Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen's Institute for AI in Medicine and studies how emerging technology can improve patient care, shares Borkowski's concerns. He believes the only safe way to use generative AI in healthcare today is under the close supervision of a physician.
"The results can be completely wrong, and it's getting harder and harder to stay aware of this," Egger said. "Sure, generative AI can be used for tasks like drafting discharge letters ahead of time. But it's the physicians' job to check the output and make the final call."
Generative AI can perpetuate stereotypes
One particularly harmful risk of generative AI in healthcare is its tendency to perpetuate stereotypes.
In a 2023 study from Stanford Medicine, researchers posed questions about kidney function, lung capacity, and skin thickness to ChatGPT and other generative AI–powered chatbots. The co-authors found that ChatGPT's answers were not only frequently wrong, but some also reinforced long-standing false beliefs about biological differences between Black and white people, untruths that have led medical providers to misdiagnose health problems.
Ironically, the patients most likely to be harmed by generative AI in healthcare are also among those most likely to use it.
People who lack health insurance (largely people of color, according to a KFF study) are more willing to try generative AI for things like finding a doctor or getting mental health support. If the AI's recommendations are tainted by bias, it could make inequalities in treatment worse.
Some experts argue, however, that generative AI is improving on this front.
In a Microsoft study published in late 2023, researchers reported achieving 90.2% accuracy with GPT-4 on four challenging medical benchmarks. Vanilla GPT-4 could not reach that score; the researchers raised it by as many as 16.2 percentage points through prompt engineering, that is, carefully designing the prompts given to GPT-4 to steer its outputs. (Microsoft, it's worth noting, is a major investor in OpenAI.)
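To make the idea concrete, here is a minimal sketch of the kind of prompt engineering such work describes: prepending worked examples to a question and ensembling over shuffled answer choices. This is an illustration of the general technique, not the study's actual pipeline; the few-shot example and the ask_model function are hypothetical placeholders for a real model API.

```python
# Minimal sketch of prompt-engineering techniques (few-shot examples plus
# choice-shuffling ensembling). Illustrative only; `ask_model` is a
# hypothetical stand-in for a real LLM API call.
import random
from collections import Counter

FEW_SHOT = [
    # Worked chain-of-thought example prepended to every query (illustrative).
    "Q: A drug blocks beta-1 receptors. What happens to heart rate?\n"
    "Reasoning: Beta-1 blockade reduces sympathetic drive on the SA node.\n"
    "Answer: Heart rate decreases.",
]

def build_prompt(question: str, choices: list[str]) -> str:
    """Assemble few-shot examples, the question, and lettered answer choices."""
    lettered = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    examples = "\n\n".join(FEW_SHOT)
    return (f"{examples}\n\nQ: {question}\n{lettered}\n"
            "Think step by step, then state the final answer choice.")

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; returns the text of the chosen answer."""
    raise NotImplementedError("plug in a real model API here")

def answer_with_ensembling(question: str, choices: list[str], runs: int = 5) -> str:
    """Re-ask with shuffled answer choices and take a majority vote."""
    votes = []
    for _ in range(runs):
        shuffled = random.sample(choices, k=len(choices))
        votes.append(ask_model(build_prompt(question, shuffled)))
    return Counter(votes).most_common(1)[0][0]
```

Shuffling the answer choices between runs helps wash out the model's position biases, which is one reason ensembles of this kind can outscore a single vanilla query.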
Beyond chatbots
But asking a chatbot a question isn't the only thing generative AI is good for. Some experts believe medical imaging, in particular, stands to benefit from the technology.
In July, a group of researchers unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC), which is designed to figure out when medical imaging specialists should rely on AI for a diagnosis versus traditional techniques. According to the co-authors, CoDoC performed better than specialists while reducing clinical workflows by 66%.
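The core idea behind such systems, deciding case by case whether to trust the AI's prediction or hand the case to a human, can be sketched with a simple confidence-threshold rule. The sketch below illustrates the general deferral concept only; the thresholds and names are assumptions, not CoDoC's published design.

```python
# Illustrative deferral rule in the spirit of systems like CoDoC: trust the
# AI only when its confidence is decisive, otherwise defer to the clinician.
# Thresholds and names are assumptions, not CoDoC's actual design.

def route_case(ai_confidence: float,
               accept_above: float = 0.95,
               reject_below: float = 0.05) -> str:
    """Decide who handles a case, given the AI's predicted probability
    that the finding is present."""
    if ai_confidence >= accept_above:
        return "accept AI positive finding"
    if ai_confidence <= reject_below:
        return "accept AI negative finding"
    return "defer to clinician"  # ambiguous cases go to the human expert

# Example: a borderline score is routed to the specialist.
print(route_case(0.62))  # -> "defer to clinician"
```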
In November, a Chinese research team demoed Panda, an AI model for detecting potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention.
Arun Thirunavukarasu, a generative AI researcher at the University of Oxford, said there is "nothing unique" about the technology that precludes its deployment in healthcare settings.
"Generative AI technology could be used for less glamorous tasks in the short to medium term," he explained, "including correcting text, automatically documenting notes and letters, and improving search features to make electronic patient records more useful. There's no reason why generative AI technology, if it works, couldn't be deployed in these kinds of roles right away."
"Rigorous science"
But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to technical and compliance hurdles that must be cleared before it can be useful, and trusted, as an all-around assistive healthcare tool.
"Significant privacy and security concerns surround using generative AI in healthcare," Borkowski said. "The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system." In addition, he noted, the regulatory and legal landscape around generative AI in healthcare is still evolving, with questions of liability, data protection, and the practice of medicine by machines yet to be resolved.
Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says there must be "rigorous science" behind patient-facing tools.
"Pragmatic randomized controlled trials demonstrating clinical benefit should justify the deployment of patient-facing generative AI, especially where there is no direct clinician oversight," he said. "Going forward, proper governance is essential to capture any unanticipated harms after deployment at scale."
The World Health Organization recently released guidelines that advocate for this type of science and for human oversight of generative AI in healthcare, as well as for auditing, transparency, and impact assessments of the technology by independent third parties. The guidelines say the aim is for a diverse cohort of people to participate in the development of generative AI for healthcare, with opportunities to voice concerns and provide input throughout the process.
"Until the concerns are adequately addressed and the right safeguards are put in place," Borkowski said, "the widespread use of medical generative AI may be… potentially harmful to patients and the healthcare industry as a whole."