While 2024 was a great year for consultation on the use of AI in healthcare, one expert feels 2025 needs to focus on translating those words into action.
The founder of the Australian Alliance for AI in Healthcare has called for the nation to grab its coat and get moving on implementing the AI in healthcare policy roadmap.
Speaking almost a year to the day after the launch of the AAAIH’s National Policy Roadmap for Artificial Intelligence, the Alliance’s founder said he was pleasantly surprised by the progress made – but feels that success over the next 12 months will be measured by actions rather than words.
“If you had asked me a year ago when I launched whether we would have achieved as much as we [did in the first 12 months], the answer would have been no,” said Professor Enrico Coiera, director of Macquarie University’s Centre for Health Informatics, at the Australasian Institute of Digital Health’s AI Care conference last week.
“I thought there would have been much more resistance. I’m delighted that there has been real engagement going on.”
The AAAIH’s roadmap made 16 recommendations across five priority areas to outline how Australia can establish a fully funded plan to create an AI-enabled healthcare system that delivers personalised healthcare in a safe, ethical, and sustainable fashion by the end of 2025.
“The roadmap is really an attempt to identify those low-hanging fruits that we can use to plug urgent gaps now, using existing structures and organisations where we can,” said Professor Coiera.
“It doesn’t mean that these are the only things that need to happen – in fact, there’s a lot that needs to happen – but at least focusing on some early wins is not a bad idea.”
Read on to see how much progress has been made on each of the five priority areas.
AI safety, quality, ethics, and security
Professor Coiera was the most positive about the progress made in the first priority area, which he felt was the most critical aspect of the roadmap.
Although a national AI in healthcare council bringing together all the different players in the space has not been established, Professor Coiera was pleased to see that a whole-of-government approach was being taken.
“There is still a step between that and something like the national council where you might have all the state jurisdictions also joining in, so I still think that next step is important,” he said.
There had also been some progress on the recommendation that healthcare organisations that use AI should receive accreditation for demonstrating they meet particular safety and quality practice standards, with the Australian Commission on Safety and Quality in Health Care releasing a set of guidelines on AI implementation in hospitals.
“It has no accreditation requirement behind it, but it’s a start,” Professor Coiera said, while noting that individual states had also been active in this space. NSW has established an AI-specific taskforce to consider how the technology would be implemented in its hospitals.
Professor Coiera felt there had been a better response to the recommendation to communicate the need for caution when using generative AI in a clinical setting, pointing to the TGA’s activity in this space regarding the use and regulation of generative AI and other technologies.
“But clearly [there are] technologies based on generative AI creating records which may impact care which fall outside the current framework, so this is something that needs to be finalised,” he said.
Workforce
Professor Coiera was shocked to see that the first recommendation in this priority area – that a shared code of conduct for the safe and responsible use of AI in healthcare settings be developed – had been completed.
“AHPRA actually did this, they listened … and this is something that 10 to 12 national boards have as their shared code of practice,” said Professor Coiera.
“It’s a great start, and I’m quite pleased to see that happen.”
Several other professional groups – including general practitioners, radiologists, physiotherapists, and nurses – had also been active in developing profession-specific codes for the responsible use of AI, largely focusing on the use of digital scribes.
“It shows what happens when there’s a genuine clinical need – there’s no pushback [to adopting the use of AI technology, or to creating relevant guidelines to shape its use],” Professor Coiera said.
“It’s a clear sign that at least some professional groups get that there are some very big changes happening.”
Consumers
Helping consumers understand how to safely use AI to help navigate the healthcare system was one area where little progress had been made.
While the roadmap made three recommendations in this area that would directly benefit consumers, no real outcomes had been delivered in this space.
“[The Consumers Health Forum of Australia] has jumped into the fray and made some very clear recommendations to the Senate Select Committee, and a lot of them emphasised the same topics we emphasised in the roadmap, but I’ve got no outcomes to report from the consumer section,” Professor Coiera said.
“This is an area where more needs to happen.”
Industry
The response to the industry-specific recommendations was similarly underwhelming.
While the Department of Industry has launched four centres aimed at providing industry and community education on adopting AI, none specifically focuses on healthcare.
There had also been little movement at a federal level on improving access to clinical data to support the development of new AI products, with Professor Coiera highlighting that the various state governments – which sit on very large collections of health data – would need to step up to the plate.
Professor Coiera did acknowledge the work the NSW government had done in this space, developing a “data lake” that could be used to help develop generative models and other technologies.
“[This is] a great resource for industry, so hopefully those sorts of data resources get developed across the nation and become more and more accessible to industry,” Professor Coiera said.
Research
The roadmap made a single recommendation in this priority area – that the federal government provide significant targeted funding support for healthcare AI research – as the move to incorporate AI into healthcare will not get anywhere without research into the development and evaluation of new technologies.
“For example, right now there is a huge research gap around the safety and effectiveness of digital scribes. There are really no great studies that tell us the sorts of impact they have… nothing that I would call compelling one way or the other,” said Professor Coiera.
Rather than allocating new money to AI-specific research, the National Health and Medical Research Council and the Medical Research Future Fund conducted a retrospective review of funding awarded in recent years, claiming to have awarded over $200 million in grant funding for AI projects between 2015 and 2023.
“Our calculations suggest that this is still less than 1% of spending on research,” Professor Coiera said.
“And we’re talking about one of the most fundamental changes to healthcare practice, so we can do better.”
AI Care was held in Melbourne on 27 and 28 November.