Proceed with caution regarding AI-generated ophthalmic references and abstracts
Clinicians should be alert to the fact that while artificial intelligence (AI) is capable of generating ideas and references, it is crucial to thoroughly vet and fact-check any medical research content that AI produces.
Researchers from the Cole Eye Institute, Cleveland Clinic Foundation, studied 2 versions of an AI chatbot, prompting each to generate scientific abstracts and 10 references for clinical research questions across 7 ophthalmology subspecialties.
A so-called hallucination rate — the proportion of generated references that could not be verified — was calculated for the earlier and updated versions of the chatbot and compared. (Hallucination is a term for the phenomenon in which artificial intelligence algorithms and deep learning neural networks produce outputs that are not real or do not correspond to any data on which the algorithm was trained. References that are not real match no underlying data.)
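As a rough illustration of the metric, a hallucination rate of this kind can be computed as the fraction of generated references that fail verification. The sketch below is hypothetical: the study's actual verification workflow is not described here, and the data and `verified` flags are illustrative placeholders, not the study's results.

```python
def hallucination_rate(references):
    """Fraction of generated references that could not be verified.

    `references` is a list of (citation, verified) pairs, where
    `verified` is True if the citation matched a real publication.
    """
    if not references:
        return 0.0
    unverified = sum(1 for _, verified in references if not verified)
    return unverified / len(references)

# Illustrative data only: 10 chatbot-generated references, 3 of which
# could not be matched to a real publication.
refs = [("ref%d" % i, i >= 3) for i in range(10)]
print(hallucination_rate(refs))  # 0.3
```

In practice, verification would mean checking each citation against a bibliographic database; the point of the metric is simply that it lets rates from different chatbot versions be compared on the same footing.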
It is noted that in some cases the chatbot produced links that were not real. On top of that, the chatbot was unable to distinguish nuances in the scientific literature (e.g. oral vs intravenous dosing of steroids in optic neuritis). Current AI detectors perform poorly at identifying AI-generated text, especially text from the newer version of the chatbot. The scientific community at large must be wary of the implications of using generative AI for research purposes.
You can read more here https://www.ophthalmologytimes.com/view/proceed-with-caution-regarding-ai-generated-ophthalmic-references-and-abstracts