The AI revolution: for patients, promise and challenges ahead
(HealthDay)—Streaks of color swirl through a pulsing, black-and-white image of a patient’s heart. They represent blood, and they’re color-coded based on speed: turquoise and green for the fastest flow, yellow and red for the slowest.
This real-time video, which can be rotated and viewed from any angle, allows doctors to spot problems like a leaky heart valve or a failing surgical repair with unprecedented speed. And artificial intelligence (AI) imaging technology made it possible.
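For the technically curious, the color display described above comes down to a mapping from per-voxel flow speed to a palette. Below is a minimal sketch in Python of that kind of speed-to-color lookup; the velocity array, the colormap choice and the normalization range are illustrative assumptions, not Arterys' implementation.

```python
# Minimal sketch: map per-voxel blood-flow speed to display colors.
# NOT the Arterys implementation; the array shape, colormap, and
# normalization here are illustrative assumptions.
import numpy as np
from matplotlib import cm

# Hypothetical velocity field, (x, y, z, component) in cm/s, of the
# kind a 4D flow MRI sequence might provide for one time frame.
velocity = np.random.uniform(-100.0, 100.0, size=(64, 64, 32, 3))

# Speed is the magnitude of the velocity vector at each voxel.
speed = np.linalg.norm(velocity, axis=-1)

# Normalize to [0, 1] and look up a color per voxel; any perceptually
# ordered colormap plays the role of the palette in the article.
normalized = (speed - speed.min()) / (speed.max() - speed.min())
colors = cm.viridis(normalized)  # RGBA array, shape (64, 64, 32, 4)

print(colors.shape)
```

Rendering and rotating that volume in real time is the hard part; the point of the sketch is only that the colors encode speed, not anatomy.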
“It’s quite simple, it’s like a video game,” said Dr. Albert Hsiao, an associate professor of radiology at the University of California, San Diego, who developed the technology while a medical resident at Stanford University.
There’s a lot going on behind the scenes to support this simplicity. Each 10-minute scan produces 2 to 10 gigabytes of data. To handle such huge, complicated data sets, Hsiao and his colleagues at Arterys, the company he helped found in 2012 to develop the technology, decided to build the infrastructure in the cloud, where other researchers can access it from their own systems.
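Some back-of-the-envelope arithmetic shows why: at the upper end, 10 gigabytes over a 10-minute scan works out to roughly 17 megabytes per second of sustained data handling, far too much to pass around as a single in-memory blob. The sketch below shows the generic pattern of streaming such a file in chunks; the chunk size and the checksum stand-in for real processing are illustrative assumptions, and this is not Arterys' code.

```python
# Minimal sketch of chunked streaming for multi-gigabyte scan files,
# so no single read has to hold 2-10 GB in memory. The chunk size and
# the checksum example are illustrative assumptions, not Arterys' code.
import hashlib

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk

def stream_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield a large scan file piece by piece."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

def checksum(path):
    """Hash a scan without loading it whole; a stand-in for any
    per-chunk step such as upload, encryption or de-identification."""
    digest = hashlib.sha256()
    for chunk in stream_chunks(path):
        digest.update(chunk)
    return digest.hexdigest()
```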
And now, investigators around the world are using this cloud-based infrastructure to share and test medical AI imaging models in the Arterys Marketplace. “We’ve made it almost as easy to get medical AI online as to upload a YouTube video,” said Arterys product strategy manager Christian Ulstrup.
Arterys decided to open up its $50 million platform to all comers—a move that raised eyebrows in the competitive world of health care and medicine—because the company realized that the technology’s full potential to transform medicine couldn’t be achieved without collaboration from others, Ulstrup explained.
“There are all these brilliant researchers, startup founders and individual developers who are working with machine learning models with the data they find online,” Ulstrup explained. “The thing that’s really heartbreaking is most of these models that could be used to meet unmet clinical needs end up dying on hard drives. We’re just trying to connect these people who don’t really have a communication channel.”
Artificial intelligence—basically, computer programs or machines that can learn—has the potential to open up access to health care, improve health care quality and even reduce costs, but it also carries real risks. AI tools have to be “trained” with huge quantities of high-quality data, and to be useful they have to be robust enough to work in any setting. And using AI that is trained on biased data could harm patients.
AI as ‘double-edged sword’
“It’s very important that we start looking at the unconscious biases in the data to make sure that we don’t hardwire discriminatory recommendations,” said Dr. Kevin Johnson, chair of biomedical informatics at Vanderbilt University Medical Center in Nashville. He prefers the term “augmented intelligence” to “artificial intelligence,” since AI aims to extend the abilities of clinicians, not to steal their jobs.
One key application of AI in health care will be to identify patients who are at risk of poor outcomes, but such predictions are worse than useless if doctors don’t know how to prevent these outcomes, or the resources aren’t available to help patients, Johnson added. “We don’t have the work force who plays the role of the catcher’s mitt” and can step in and help these at-risk patients, he said, especially in a health care system now stretched to the limit by the coronavirus pandemic.
“I think we have to think creatively about how we restructure the system to support some of the outcomes that are of interest to us,” he said.
Dr. Ravi Parikh is an instructor in medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia. He pointed out that “AI and machine learning are sort of a double-edged sword, particularly in my field of oncology.”
AI has proven its potential for interpreting images, such as diagnosing lung cancer from a CT scan. But when it comes to using AI to support clinical decisions, like whether a patient should have chemotherapy or go to the hospital, there’s a risk that it won’t help patients and could even be harmful, Parikh noted.
“Even though you might have an AI that’s accurate on the whole, if it’s mischaracterizing an outcome for a specific group of patients you really have to question whether it’s worth it,” he said.
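Parikh’s warning is easy to make concrete: aggregate accuracy can look respectable while one subgroup is badly served. Here is a minimal sketch of a subgroup audit, using made-up labels, predictions and group assignments; none of the numbers come from a real model.

```python
# Minimal sketch of a subgroup audit: overall accuracy can hide a
# group the model mischaracterizes. Labels, predictions, and group
# assignments are made up for illustration.
import numpy as np

y_true = np.array([1, 0] * 8 + [1, 1, 0, 1])
y_pred = np.array([1, 0] * 8 + [0, 0, 0, 0])
groups = np.array(["A"] * 16 + ["B"] * 4)

overall = (y_true == y_pred).mean()
print(f"overall accuracy: {overall:.0%}")  # 85%: looks fine in aggregate

for g in np.unique(groups):
    mask = groups == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g} accuracy: {acc:.0%}")  # A: 100%, B: 25%
```

On this toy data the model is right 85 percent of the time overall, yet only a quarter of the time for group B, exactly the failure mode Parikh describes.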
What’s been missing in the development of health care AI, Parikh added, are rigorous prospective studies to determine whether the technology is actually useful for patients.
Just as the U.S. Food and Drug Administration requires that drug companies run clinical trials to confirm that their product is safe and effective, Parikh said, the FDA should start requiring makers of AI tools to test their safety and effectiveness in humans.
And just as the agency tracks the safety of drugs once they reach the market, the FDA should set up frameworks to study whether the AI algorithms it approves are reinforcing existing biases, he noted.