2024-04-18 15:15:17 | Engadget

As large language models (LLMs) continue to advance, so do questions about how they can benefit society in areas such as medicine. A recent study from the University of Cambridge's School of Clinical Medicine found that OpenAI's GPT-4 performed nearly as well in an ophthalmology assessment as experts in the field, the Financial Times first reported. In the study, published in PLOS Digital Health, researchers tested the LLM, its predecessor GPT-3.5, Google's PaLM 2 and Meta's LLaMA with 87 multiple-choice questions. Five expert ophthalmologists, three trainee ophthalmologists and two unspecialized junior doctors received the same mock exam. The questions came from a textbook used to test trainees on everything from light sensitivity to lesions. Its contents aren't publicly available, so the researchers believe the LLMs couldn't have been trained on them previously. ChatGPT, equipped with GPT-4 or GPT-3.5, was given three chances to answer each question definitively; otherwise its response was marked as null.

GPT-4 scored higher than the trainees and junior doctors, getting 60 of the 87 questions right. While this was significantly higher than the junior doctors' average of 37 correct answers, it only just beat the three trainees' average of 59.7. And although one expert ophthalmologist answered only 56 questions correctly, the five experts averaged 66.4 correct answers, beating the machine. PaLM 2 scored 49 and GPT-3.5 scored 42. LLaMA scored the lowest at 28, falling below the junior doctors. Notably, these trials took place in mid-2023.

While these results point to potential benefits, there are also quite a few risks and concerns. The researchers noted that the study offered a limited number of questions, especially in certain categories, meaning real-world results might vary. LLMs also have a tendency to "hallucinate," or make things up. That's one thing if it's an irrelevant fact, but claiming a patient has a cataract or cancer is another story. As in many instances of LLM use, the systems also lack nuance, creating further opportunities for inaccuracy.

This article originally appeared on Engadget at https://www.engadget.com/gpt-4-performed-close-to-the-level-of-expert-doctors-in-eye-assessments-131517436.html?src=rss
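As a rough illustration of the scoring scheme described above, the sketch below treats null (non-definitive) responses as incorrect and converts the raw correct-answer counts reported in the article, out of 87 questions, into percentage accuracy. This is not the study's actual grading code; the function names and toy data are assumptions made only for illustration.

```python
# Illustrative sketch only: mimics the scoring rules described in the article,
# where a response left non-definitive after three attempts is marked null and
# counts as incorrect.

TOTAL_QUESTIONS = 87  # size of the mock exam described in the article


def grade(answers: list, answer_key: list) -> int:
    """Count correct answers; None stands in for a 'null' (non-definitive) response."""
    return sum(
        1 for given, correct in zip(answers, answer_key)
        if given is not None and given == correct
    )


def accuracy(correct: float, total: int = TOTAL_QUESTIONS) -> float:
    """Convert a raw correct-answer count into percentage accuracy."""
    return 100.0 * correct / total


# Toy demo of the null rule on a three-question exam: the None answer scores zero.
print(grade(["A", None, "C"], ["A", "B", "C"]))  # -> 2

# Raw scores reported in the article (human figures are group averages).
reported = {
    "GPT-4": 60,
    "Expert ophthalmologists (avg)": 66.4,
    "Trainee ophthalmologists (avg)": 59.7,
    "PaLM 2": 49,
    "GPT-3.5": 42,
    "Junior doctors (avg)": 37,
    "LLaMA": 28,
}

for who, score in reported.items():
    print(f"{who}: {score}/{TOTAL_QUESTIONS} correct ({accuracy(score):.1f}%)")
```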


Category: Marketing and Advertising

 
