Staying AI safe

Earlier this year, the Australian government released an interim response as part of the consultation on Safe and Responsible AI in Australia. The response highlighted the government's recognition of the technology's potential benefits, but also a lack of public trust in the safe and responsible use of AI systems. That lack of trust is acting as a 'handbrake' on business adoption and public acceptance. The interim response signals a mix of short- and long-term actions and objectives, with strong commitments to further stakeholder consultation to shape Australia's AI regulatory landscape.
The response addresses several key points raised in the submissions, notably the concern that high-risk AI applications currently operate with insufficient regulatory oversight. To address this, the government has proposed establishing guardrails to mitigate potential harms; while examples are limited, they include plans for testing, enhanced transparency and accountability, and clarification of existing laws.
The extent of risk will determine when and which guardrails apply: high-risk applications would likely face mandatory guidelines, while the use of AI in low-risk settings would be largely unimpeded, with the aim of boosting adoption and innovation in this space.
Determining what is classified as 'high-risk' AI in the Australian context was noted as a priority. To this end, the government has announced an Expert Advisory Group to assist in the development of AI guardrails. At the same time, a voluntary AI Safety Standard is being developed with industry input to manage AI-related risks, along with a voluntary watermarking and labelling scheme to signal AI-generated content.
The government's approach does not appear to emulate the EU's dedicated Artificial Intelligence Act; instead, it opts to leverage or strengthen existing regulatory provisions to target AI-specific harms wherever possible, for example through reforms to privacy and online safety regulations and the proposed Misinformation Bill.
Given the consultative approach signalled in the response, the doors remain open for stakeholders to have a say in the future of AI governance in Australia. Expect a lot more in this space!

Kieran Lindsay, CMT Research Officer