Dr. AI will see you now.
It might not be that far from the truth, as more and more physicians are turning to artificial intelligence to ease their busy workloads.
Studies have shown that up to 10% of doctors are now using ChatGPT, a large language model (LLM) made by OpenAI. But just how accurate are its responses?
A team of researchers from the University of Kansas Medical Center decided to find out.
“Every year, about a million new medical articles are published in scientific journals, but busy doctors don’t have that much time to read them,” Dan Parente, the senior study author and an assistant professor at the university, told Fox News Digital.
A team of researchers at the University of Kansas decided to find out whether AI is truly helping doctors. (iStock)

“We wondered if large language models, in this case ChatGPT, could help clinicians review the medical literature more quickly and find articles that might be most relevant for them.”
For a new study published in the Annals of Family Medicine, the researchers used ChatGPT 3.5 to summarize 140 peer-reviewed studies from 14 medical journals.
Seven physicians then independently reviewed the chatbot’s responses, rating them on quality, accuracy and bias.
The AI responses were found to be 70% shorter than real physicians’ responses, but they were rated high in accuracy (92.5%) and quality (90%), and no bias was found.
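The study doesn’t publish the exact prompts or code the researchers used, but for readers curious about the mechanics, a summarization request of this general kind might look roughly like the sketch below, which uses OpenAI’s Python SDK. The model name, prompt wording, word limit and the summarize_abstract helper are illustrative assumptions, not the researchers’ actual setup.

```python
# A minimal sketch of asking an OpenAI chat model to summarize a medical abstract.
# Assumptions: the OpenAI Python SDK (v1.x) is installed and OPENAI_API_KEY is set;
# the prompt wording and word limit are hypothetical, not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable


def summarize_abstract(abstract: str, max_words: int = 125) -> str:
    """Return a short plain-language summary of a journal abstract."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the "ChatGPT 3.5" model described in the article
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a medical literature assistant. Summarize the abstract "
                    f"in at most {max_words} words for a busy physician."
                ),
            },
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = "Background: ... Methods: ... Results: ... Conclusions: ..."
    print(summarize_abstract(sample))
```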
AI responses, such as those from ChatGPT, were found to be 70% shorter than real physicians’ responses in a new study. (Frank Rumpenhorst/picture alliance via Getty Images)

Serious inaccuracies and hallucinations were “uncommon,” found in only four of 140 summaries.
“One problem with large language models is also that they can sometimes ‘hallucinate,’ which means they make up information that just isn’t true,” Parente noted.
“We were worried that this would be a serious problem, but instead we found that serious inaccuracies and hallucination were very rare.”
Out of the 140 summaries, only two were hallucinated, he said.
Minor inaccuracies were a little more common, however, appearing in 20 of 140 summaries.
A new study found that ChatGPT also helped physicians figure out whether an entire journal was relevant to their medical specialty. (iStock)

“We also found that ChatGPT could generally help physicians figure out whether an entire journal was relevant to a medical specialty (for example, to a cardiologist or to a primary care physician), but had a lot harder of a time knowing when an individual article was relevant to a medical specialty,” Parente added.
Based on these findings, Parente noted that ChatGPT could help busy doctors and scientists decide which new articles in medical journals are most worthwhile for them to read.
“People should encourage their doctors to stay current with new advances in medicine so they can provide evidence-based care,” he said.
‘Use them carefully’

Dr. Harvey Castro, a Dallas-based board-certified emergency medicine physician and national speaker on artificial intelligence in health care, was not involved in the University of Kansas study but offered his insights on ChatGPT use by physicians.
“AI’s integration into health care, particularly for tasks such as interpreting and summarizing complex medical studies, significantly improves clinical decision-making,” he told Fox News Digital.
Dr. Harvey Castro of Dallas noted that ChatGPT and other AI models have some limitations. (Dr. Harvey Castro)

“This technological support is critical in environments like the ER, where time is of the essence and the workload can be overwhelming.”
Castro noted, however, that ChatGPT and other AI models have some limitations.
“Despite AI’s potential, the presence of inaccuracies in AI-generated summaries, although minimal, raises concerns about the reliability of using AI as the sole source for clinical decision-making,” Castro said.
“The article highlights a few serious inaccuracies within AI-generated summaries, underscoring the need for cautious integration of AI tools in clinical settings.”
It’s still important for doctors to review and oversee all AI-generated content, one expert in AI noted. (Cyberguy.com)

Given these potential inaccuracies, particularly in high-risk scenarios, Castro stressed the importance of having health care professionals oversee and validate AI-generated content.
The researchers agreed, noting the importance of weighing the benefits of LLMs like ChatGPT against the need for caution.
“Like any power tool, we need to use them carefully,” Parente told Fox News Digital.
“When we ask a large language model to do a new task, in this case summarizing medical abstracts, it’s important to check that the AI is giving us reasonable and accurate answers.”
As AI becomes more widely used in health care, Parente said, “we should insist that scientists, clinicians, engineers and other professionals have done careful work to make sure these tools are safe, accurate and beneficial.”
For more Health articles, visit foxnews.com/health