The Stanford study comes a couple of years after a study published in PLOS Medicine that trained machine learning models to detect pneumonia in chest X-rays from the Washington, D.C.-based National Institutes of Health Clinical Center, New York City-based Mount Sinai Hospital and Indianapolis-based Indiana University Network for Patient Care. In three of the five experiments performed, the AI system performed significantly worse when applied to patient data from locations different from the one on which it had been trained.
"It became evident that all the datasets just seemed to be coming from the same sorts of places: the Stanfords and UCSFs and Mass Generals," a member of the research team, Amit Kaushal, MD, PhD, told STAT.
The absence of geographic diversity in the data used to train medical artificial intelligence systems may mean the technology is unduly applying a one-size-fits-all approach to patient care, according to a study published Sept. 22 in JAMA Network Open.
Katie Adams – Friday, September 25th, 2020
Stanford (Calif.) University researchers found that because many medical AI researchers are affiliated with prestigious coastal academic medical centers and have access to those institutions' training data, the records used to train medical AI systems come primarily from patients in three states: California, New York and Massachusetts.
The researchers argue the lack of geographic diversity in medical AI training data may be problematic because many medical factors vary greatly by region, such as the prevalence of certain diseases, clinical treatment patterns and access to care. Dr. Kaushal told STAT these factors "end up getting baked into the dataset and become implicit assumptions in the data, which may not be valid assumptions across the country."