Can You Trust ChatGPT for Cancer Treatment Advice? 34.3% of Recommendations Mismatch National Comprehensive Cancer Network Guidelines




Artificial intelligence is being adopted across every field, and medicine is no exception: ChatGPT is already being used for medical counseling. In particular, ChatGPT is gaining popularity because it can respond quickly to questions of all kinds.

But the future is not entirely rosy. Many worry that the rapid application of artificial intelligence could create risks in the medical field. According to a study published in JAMA Oncology, serious flaws were found in the cancer-treatment advice offered by ChatGPT.

In the study, conducted jointly by Boston Children's Hospital and other institutions, the researchers created 104 prompts related to lung cancer, prostate cancer, and breast cancer and asked ChatGPT to recommend a treatment. Four oncologists reviewed and scored ChatGPT's answers against five criteria. To measure the quality of the advice given to patients, the researchers compared it with the cancer treatment guidelines of the National Comprehensive Cancer Network (NCCN).

The review found that as many as 34.3% of ChatGPT's responses included one or more treatment recommendations that did not match the NCCN guidelines. In addition, about 13% of the responses were "hallucinations": answers that may sound true but are completely inaccurate or unrelated to the question.

"It is an important concern, because incorrect information can lead patients to harmful decisions," pointed out Dr. Harvey Castro, an emergency medicine specialist and AI expert. "ChatGPT's responses can sound very human-like and quite convincing," said Daniel Bitterman, co-author of the study and assistant professor of radiation oncology at Harvard Medical School. "The problem is that clinical decisions involve subtle differences between patients. The right answer can be very nuanced, and that is something ChatGPT and other large language models cannot yet provide," Bitterman stressed.

JULIE KIM

US ASIA JOURNAL