The following is a summary of “Dr. Google vs. Dr. ChatGPT: exploring the use of artificial intelligence in ophthalmology by comparing the accuracy, safety, and readability of responses to frequently asked patient questions regarding cataracts and cataract surgery,” published in the March 2024 issue of Seminars in Ophthalmology by Cohen et al.
Patients often search online to learn about eye health, particularly cataracts and cataract surgery, and Google and large language models such as ChatGPT are popular sources. However, the quality of the information these sources provide is uncertain.
Researchers conducted a retrospective study that examined the accuracy and safety of Google and ChatGPT responses to cataract-related FAQs, assessed the readability of those responses, and explored ChatGPT’s potential as a patient education tool.
They collected the top 20 frequently asked questions (FAQs) about cataracts and cataract surgery from Google. Ophthalmologists then evaluated the accuracy and safety of both the Google and ChatGPT responses, and the readability of each response was assessed using five validated readability indices. ChatGPT was also asked to generate notes, postoperative instructions, and patient education materials that met specified readability criteria.
The results showed that ChatGPT’s answers to the 20 FAQs were significantly longer and more complex than Google’s (P<0.001), averaging a grade level of 14.8 (a college reading level). Reviewers correctly distinguished human from chatbot responses only 31% of the time. Google responses contained incorrect or inappropriate material 27% of the time, compared with 6% for ChatGPT (P<0.001), and when the responses were compared directly, reviewers preferred the chatbot’s answers 66% of the time.
Investigators concluded that ChatGPT’s answers to cataract FAQs outperformed Google’s, containing fewer errors, although their higher reading level makes them most helpful for patients with greater health literacy.
Source: tandfonline.com/doi/full/10.1080/08820538.2024.2326058