New York, April 5 (IANS) OpenAI’s ChatGPT can be helpful for breast cancer screening advice, but consumers using the Artificial Intelligence (AI) tool for health information still need to confirm information with their doctors, suggests a study.
Researchers at the University of Maryland found that the answers generated by ChatGPT were correct the vast majority of the time; sometimes, though, the information was inaccurate or even fictitious.
In February 2023, researchers from the varsity’s School of Medicine created a set of 25 questions related to advice on getting screened for breast cancer. They submitted each question to ChatGPT three times to see what responses were generated, since the chatbot is known to vary its response each time a question is posed.
Three radiologists fellowship-trained in mammography evaluated the responses; they found that the responses were appropriate for 22 out of the 25 questions.
The chatbot did, however, provide one answer based on outdated information. Two other questions had inconsistent responses that varied significantly each time the same question was posed.
“We found ChatGPT answered questions correctly about 88 per cent of the time, which is pretty amazing,” said Paul Yi, Assistant Professor of Diagnostic Radiology and Nuclear Medicine.
“It also has the added benefit of summarising information into an easily digestible form for consumers to easily understand,” Yi added.
The findings, published in the journal Radiology, showed that ChatGPT correctly answered questions about the symptoms of breast cancer, who is at risk, and questions on the cost, age, and frequency recommendations concerning mammograms.
However, the downside is that its responses are not as comprehensive as what a person would normally find in a Google search.
“ChatGPT provided only one set of recommendations on breast cancer screening, issued from the American Cancer Society, but did not mention differing recommendations put out by the Centers for Disease Control and Prevention (CDC) or the US Preventive Services Task Force (USPSTF),” said lead author Hana Haver, a radiology resident at the university’s Medical Center.
In one response deemed by the researchers to be inappropriate, ChatGPT provided outdated advice on planning a mammogram around Covid-19 vaccination.
The advice to delay a mammogram for four to six weeks after getting a Covid-19 shot was changed in February 2022. Inconsistent responses were given to questions concerning an individual’s personal risk of getting breast cancer and on where someone could get a mammogram.
“We’ve seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims,” said Yi.
“Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice.”
According to a recent report by GlobalData, a data and analytics company, the revolutionary technology holds the potential to completely change the healthcare industry. It estimates the total AI market will be worth $383.3 billion in 2030, with a robust 21 per cent compound annual growth rate (CAGR) from 2022 to 2030.
However, the usage of chatbots in patient care and medical research also raises several ethical concerns.
As massive amounts of patient data are fed into machine-learning models to improve the accuracy of chatbots, patient information is left vulnerable. The information provided by chatbots may also be inaccurate or misleading, depending on the sources fed into them, the report said.
–IANS
rvt/vd