Artificial Intelligence (AI) and medical device software
Information for software manufacturers about how we regulate AI medical devices.
We regulate Artificial Intelligence (AI) as a medical device when it is used for diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease, injury or disability.
When AI is considered a medical device
AI would generally be a medical device if it is intended to be used for:
- diagnosis, prevention, monitoring, prediction, prognosis, or treatment of a disease, injury, or disability
- alleviation of, or compensation for, an injury or disability
- investigation of the anatomy or of a physiological process
- control or support of conception.
A medical device can be any app, website, program, internet-based service or package. It may run on a watch, phone, tablet, laptop or other computer, or be part of a hardware medical device. It may be part of an ecosystem with cloud components or a standalone product.
Examples of medical devices that use AI include:
- apps that aid in diagnosing melanoma from mobile phone photos
- analytics in the cloud that predict patient deterioration
- chatbots that give treatment suggestions to consumers or health professionals
- clinical decision support that uses generative AI to create symptom and diagnosis summaries
- eye disease screening apps that cover diabetic retinopathy, glaucoma and macular degeneration
- radiology image analysis to aid in diagnosing pneumothorax, pneumonia and tumours.
How AI medical devices are regulated
Generally, regulated medical devices are included in the Australian Register of Therapeutic Goods (ARTG).
Changes to the regulations for software-based medical devices, which may include AI, took effect on 25 February 2021, including a transition period for devices already included in the ARTG.
We have published guidance to help developers determine whether software is a medical device, including the “Is my software regulated?” flowchart on our website, as well as guidance on recent regulatory changes, boundary clarifications for software, and FAQs.
Further detail about the regulation of software-based medical devices, including risk classification, is published on our website. If an AI medical device is supplied to people in Australia, it comes under the medical device regulations, regardless of where it is based.
Generative AI regulation
Software that incorporates generative AI, such as large language models (LLMs), text generators and multimodal generative AI, is regulated as a medical device if it meets the definition.
Artificial intelligence text-based products like ChatGPT, GPT-4, Gemini (formerly Bard), Claude and others have recently received media attention.
Clinical and technical evidence must demonstrate the safety, reliability and performance of a product that uses generative AI, such as an LLM, to the same standard as other medical devices. For higher risk products, the clinical and technical evidence requirements are more stringent.
It is important to note that a product using generative AI, such as an LLM, is unlikely to be a medical device, and is not regulated by us, where:
- there is no medical purpose or claim associated with the product, or
- it does not meet the definition of a medical device in section 41BD of the Therapeutic Goods Act 1989.
Large language models and chatbot regulation
When LLMs or chatbots have a medical purpose and are supplied to Australians, they may be subject to medical device regulations for software and need approval by us.
Regulatory requirements are technology-agnostic for software-based medical devices and apply regardless of whether the product incorporates components like AI, chatbots, cloud, mobile apps or other technologies.
In these cases, where a developer adapts, builds on or incorporates an LLM into their product or service offering to a user or patient in Australia, the developer is deemed the manufacturer and has obligations under section 41BD of the Therapeutic Goods Act 1989.
AI developers will need to understand and demonstrate the sources and quality of the text inputs used to train and test the model, and of the data used in clinical studies, in addition to showing how the data is generalisable and appropriate for use with Australian populations.
Evidence requirements for software using AI
For software that uses AI or machine learning (ML), in addition to the general software requirements, the manufacturer must hold evidence that is sufficiently transparent to enable evaluation of the safety and performance of the product.
Transparency means that a ‘black box’ approach would not be considered acceptable (i.e. we will not accept that no evidence can be provided because the product uses ‘black box’ technology). While this continues to be a rapidly evolving area, the evidence will typically include artefacts for the following:
- Overarching statement of the objectives of the AI/ML model
- Algorithm and model design, including tuning techniques used
- Data used for training and testing, and generalisability where applicable
- Size of data sets, which must be sufficiently large to be statistically credible (see the sketch following this list)
- Information about populations that this data is based on and justification for how this data would be appropriate for the Australian population and sub-populations for whom the AI is intended to be used. Independent global draft consensus standards have been developed for datasets used in health AI, which could provide a basis for structuring this information.
- Risk management to address risks including, but not limited to, overfitting, bias and performance degradation such as data drift.
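As an illustration of the ‘statistically credible’ point above, the sketch below estimates how many positive cases a test set would need for a sensitivity estimate with a given confidence interval width. The target sensitivity of 0.90 and half-width of 0.05 are assumed example values only, not regulatory thresholds.

```python
# Illustrative only: a back-of-envelope check that a test set is large
# enough to estimate sensitivity with a usefully narrow confidence interval.
# The target sensitivity (0.90) and half-width (0.05) are assumed example
# values, not regulatory thresholds.
from math import ceil, sqrt

def positives_needed(expected_sensitivity: float, half_width: float,
                     z: float = 1.96) -> int:
    """Positive cases needed so a normal-approximation 95% CI on
    sensitivity has roughly the requested half-width."""
    p = expected_sensitivity
    return ceil(z ** 2 * p * (1 - p) / half_width ** 2)

def ci_half_width(expected_sensitivity: float, n_positives: int,
                  z: float = 1.96) -> float:
    """Approximate 95% CI half-width achieved with n_positives cases."""
    p = expected_sensitivity
    return z * sqrt(p * (1 - p) / n_positives)

if __name__ == "__main__":
    print(positives_needed(0.90, 0.05))         # ~139 positive cases
    print(round(ci_half_width(0.90, 139), 3))   # ~0.05
```

In practice, similar justification would typically be needed for other performance metrics and for the sub-populations noted above.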
In this case, the manufacturer is the organisation or individual that develops the software.
There are also evidence requirements for clinical validation.
Evidence requirements continue through the product lifecycle and include robust post-market monitoring practices to ensure continued device performance and model accuracy.
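As one illustration of how post-market monitoring for performance degradation might be operationalised, the sketch below compares a feature’s distribution in recent production data against the training reference using a two-sample Kolmogorov–Smirnov test. The feature data, the alert threshold and the choice of a KS test are illustrative assumptions, not prescribed methods.

```python
# Illustrative sketch of one possible post-market drift check: compare a
# feature's distribution in recent production data against the training
# reference using a two-sample Kolmogorov-Smirnov test.
# The data, the alert threshold and the test choice are assumptions.
import numpy as np
from scipy.stats import ks_2samp

ALERT_P_VALUE = 0.01  # example threshold for flagging drift for review

def check_drift(training_values: np.ndarray,
                production_values: np.ndarray) -> dict:
    """Return the KS statistic, p-value and whether the drift alert fires."""
    statistic, p_value = ks_2samp(training_values, production_values)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        "drift_alert": p_value < ALERT_P_VALUE,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training = rng.normal(loc=0.0, scale=1.0, size=5000)    # reference data
    production = rng.normal(loc=0.3, scale=1.0, size=2000)  # shifted data
    print(check_drift(training, production))  # drift_alert: True
```

A drift alert of this kind would typically feed into the manufacturer’s risk management and change-control processes rather than trigger automatic model changes.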
Synthetic data use
Data can be generated to augment or replace data from real patients or devices for use in training or validation; this is often referred to as synthetic data. Synthetic data may be considered acceptable in some instances, but an explanation of the rationale and a description of the origin and construction of the synthetic datasets will be required. Examples may include rare diseases where data is scarce, or situations with data privacy considerations.
For many use cases, synthetic data may not provide sufficient depth and variability to adequately validate a product. Where data is readily available in large volumes, it is less likely that synthetic data would be considered appropriate.
For clinical validation, synthetic data may supplement but will generally not replace clinical data in satisfying clinical evidence requirements.
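As a minimal sketch of documenting the origin and construction of a synthetic dataset, the example below generates simulated measurements and records the rationale, construction method and parameters alongside them. The scenario, field names and parameter values are illustrative assumptions only.

```python
# Illustrative sketch: generating a small synthetic dataset and recording
# its rationale, origin and construction alongside it, since that
# description will be required. All values here are assumptions.
import json
import numpy as np

rng = np.random.default_rng(42)

# Example construction: simulate a lab measurement for a rare-disease cohort
# where real data is scarce, using assumed distribution parameters.
synthetic_measurements = rng.lognormal(mean=1.2, sigma=0.4, size=500)

provenance = {
    "rationale": "Rare disease; insufficient real-world cases for training.",
    "origin": "Fully simulated; no real patient records used.",
    "construction": {
        "method": "lognormal sampling",
        "parameters": {"mean": 1.2, "sigma": 0.4, "n": 500},
        "random_seed": 42,
    },
    "intended_use": "Training augmentation only; not clinical validation.",
}

np.savetxt("synthetic_measurements.csv", synthetic_measurements, delimiter=",")
with open("synthetic_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```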
Whole-of-government AI oversight
The Australian Commission on Safety and Quality in Health Care (the Commission) is responsible for e-Health safety and is developing information to support AI safety.
We are working with the Commission and other parts of the Australian Government Department of Health and Aged Care to ensure safety and performance, while also balancing the need to minimise regulatory burden for AI. This includes working with the Department of Industry, Science and Resources on their whole-of-government work on responsible AI.
What you need to do
Check the “Is my software regulated?” flowchart to see whether your software is a medical device, and make sure you understand all your obligations, including those for privacy, data, cyber security and advertising to health professionals or consumers.
If you make changes to your product, its regulatory status may change, and there are requirements to notify us, either through a Device Change Request or in relation to Conformity Assessment.
Ask if you need further guidance or clarification after reviewing the information on our website. You can also request a formal pre-submission meeting prior to submitting an application.
You can contact us by email at digital.devices@tga.gov.au.