2024-04-18 15:15:17 | Engadget

As large language models (LLMs) continue to advance, so do questions about how they can benefit society in areas such as medicine. A recent study from the University of Cambridge's School of Clinical Medicine found that OpenAI's GPT-4 performed nearly as well in an ophthalmology assessment as experts in the field, the Financial Times first reported. In the study, published in PLOS Digital Health, researchers tested the LLM, its predecessor GPT-3.5, Google's PaLM 2 and Meta's LLaMA with 87 multiple-choice questions. Five expert ophthalmologists, three trainee ophthalmologists and two unspecialized junior doctors received the same mock exam. The questions came from a textbook used to test trainees on everything from light sensitivity to lesions. The textbook's contents aren't publicly available, so the researchers believe the LLMs couldn't have been trained on them previously. ChatGPT, equipped with GPT-4 or GPT-3.5, was given three chances to answer definitively; otherwise its response was marked as null.

GPT-4 scored higher than the trainees and junior doctors, getting 60 of the 87 questions right. While this was significantly higher than the junior doctors' average of 37 correct answers, it only just beat out the three trainees' average of 59.7. Although one expert ophthalmologist answered only 56 questions accurately, the five experts averaged 66.4 correct answers, beating the machine. PaLM 2 scored 49, and GPT-3.5 scored 42. LLaMA scored the lowest at 28, falling below the junior doctors. Notably, these trials occurred in mid-2023.

While these results have potential benefits, there are also quite a few risks and concerns. The researchers noted that the study offered a limited number of questions, especially in certain categories, meaning the actual results might vary. LLMs also have a tendency to "hallucinate," or make things up. That's one thing if it's an irrelevant fact, but claiming there's a cataract or cancer is another story. As is the case in many instances of LLM use, the systems also lack nuance, creating further opportunities for inaccuracy.

This article originally appeared on Engadget at https://www.engadget.com/gpt-4-performed-close-to-the-level-of-expert-doctors-in-eye-assessments-131517436.html?src=rss
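For readers curious how the scoring protocol described above (up to three chances to answer definitively, otherwise a null response) might look in practice, here is a minimal, hypothetical sketch. It is not the researchers' actual code; the `ask_model` wrapper, the question format and the answer-parsing helper are all assumptions made for illustration.

```python
# Hypothetical sketch of an exam-style scoring loop like the one described
# in the study: each model gets up to three attempts to commit to a single
# multiple-choice option; otherwise its response is recorded as null.
import re

MAX_ATTEMPTS = 3  # as reported: three chances to answer definitively


def extract_choice(response: str) -> str | None:
    """Return 'A'-'D' if the response commits to one option, else None."""
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else None


def score_model(ask_model, questions) -> int:
    """Count correct answers out of len(questions).

    ask_model(prompt) -> str is an assumed wrapper around some LLM API.
    questions is a list of dicts like {"prompt": ..., "answer": "B"}.
    """
    correct = 0
    for q in questions:
        choice = None
        for _ in range(MAX_ATTEMPTS):
            choice = extract_choice(ask_model(q["prompt"]))
            if choice is not None:  # definitive answer given, stop retrying
                break
        # a null response (no definitive choice after three attempts)
        # simply never matches, so it scores zero for this question
        if choice == q["answer"]:
            correct += 1
    return correct
```

Under a rule like this, a model that hedges or refuses three times presumably scores zero on that question against the 87-question total, which is how null responses would feed into the head-to-head comparisons reported above.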

