In May 2020, healthcare software company Orion Health announced the New Zealand Algorithm Hub, a center for scenario modeling, risk prediction, forecasting and planning to support the country’s response to COVID-19.
For Kevin Ross, PhD, CEO of Precision Driven Health and chair of the hub’s governance group, one key to ethically using machine learning to manage the pandemic was the makeup of the governance group he led.
The group included stakeholders and experts in law, data science, public health, government and the perspective of New Zealand’s indigenous population.
“We ended up asking and answering questions we wouldn’t have thought of otherwise,” said Ross in a presentation on ethical machine learning at HIMSS21, which is taking place in Las Vegas this week.
Machine learning systems can be biased, however, just like their human creators. A 2019 study published in Science found an algorithm was significantly less likely to refer Black patients to a program that aimed to improve care for patients with complex needs. A research letter in JAMA noted that U.S. patient data algorithms were mostly pulling information from cohorts in California, Massachusetts and New York, which wouldn't be representative of patients living in other areas.
That's why it's important for healthcare providers and researchers to take care to use machine learning ethically, Ross said. Such considerations aren't new to medicine or research, he noted, with precedents ranging from the Hippocratic Oath to modern ethical research standards.
“For all of our core values of delivering excellent care to everyone, we still deliver care that isn’t equitable,” Ross said.
When evaluating machine learning, Ross suggested stakeholders be wary of hype, seek out thorough evaluations of the technology, put serious effort into correcting biases, and demand transparency in data collection and evaluation.
“At the end of the day, we’re excited about medicine and advances, we’re excited about technology, but all of this is to one end, for the people,” Ross said.