Australia’s leading medical association has called for strict regulation and transparency in the use of artificial intelligence (AI) in healthcare, after doctors in Perth were warned not to use ChatGPT to write medical notes. The Australian Medical Association (AMA) says stronger rules are necessary to protect patients and healthcare professionals and to build trust in the technology.
In a submission to the federal government’s discussion paper on safe and responsible AI, the AMA highlighted that Australia is lagging behind other countries in regulating AI. The association emphasised the need for safeguards to ensure that clinicians have the final say in decisions and that patients provide informed consent for AI-based treatments and diagnostics.
The call also extends to safeguarding patient data and providing ethical oversight to prevent health inequalities. The AMA suggests Australia consider the proposed EU Artificial Intelligence Act, which aims to categorise AI systems by risk and establish an oversight board.
The AMA's submission states that any clinical decision influenced by AI should involve human intervention at specific points in the decision-making process, with the final decision always made by a human to ensure meaningful and responsible decision-making.
- CyberBeat
CyberBeat is a grassroots initiative from a team of producers and subject matter experts. Driven by frustration at the lack of media coverage, it responds to an urgent need for a clear, concise, informative and educational approach to the growing fields of cybersecurity and digital privacy.