
ChatGPT, Copilot or Gemini? Researchers identify the best AI for studying medicine


Among the countless uses the average user can find for artificial intelligence, one of the most unexpected may be "turning it" into a teacher. It is not such a far-fetched idea: according to a study published in the scientific journal BMC Medical Education, there are already AI tools capable of doing educational work, even in medicine. Although none of them yet matches the accuracy and performance of a teacher, at least some come close. Above all, ChatGPT.

"There is an unprecedented increase in the use of obstetric artificial intelligence in medical education, which makes it necessary a hundred, 10 points higher Copilot and 20 more than Gemini, which will be less reliable Cohen Kaba applied to measure peer compatibility, and obtain the following results: GPT: 0.84 (high with teacher). Copilot: 0.69 (moderate). Gemini: 0.53 (low). Although the GPT chat has proven to be the most accurate model, researchers warn that these systems cannot be replaced, at least, a medical teacher, given that the contrast in their responses and the absence of a reliable clinical standard can display the formation of future doctors. Researchers warn that these systems cannot be replaced, at least, a medical teacher

"He added that the study provides an approach to assess the various LLMS accuracy and concluded that the GPT chat is superior to others in solving medical questions. Low accuracy generally indicates that it should be used with caution in educational environments. “On the other hand, they point out that the analysis” is limited to 40 multi -options questions, although it is varied, not all medical specialties may be completely. ”

"Inconsistent" Artificial intelligence in medicine concludes with the authors of this study that the merging of LLMS into medical education represents moral and practical challenges. “From an ethical terms, dependence on medical knowledge resulting from artificial intelligence generates concerns about accuracy, wrong information and patient safety,” they apply. “In practice, although LLMS provides accessible educational tools, they must be perforated with specific field data to improve reliability. In addition, teachers should train students to assess the content resulting from artificial intelligence, thus ensuring that enlightened decisions are made instead of blind dependence.”
