Australia Proposes Mandatory Guardrails for High-Risk AI Use
Government Won't Regulate Most AI Deployments in Effort to Boost Innovation

The Australian government plans to adopt a risk-based approach to AI governance modeled on the European Union's Artificial Intelligence Act, which imposes strict regulation on uses of AI systems that could pose risks to national security, violate individual privacy or cause harm to society.
The government in its interim response advocated for a balanced approach to regulating AI usage, proposing safety guardrails for high-risk AI implementations and allowing low-risk deployments to continue unimpeded to boost the economy and augment human productivity.
"In considering the right regulatory approach to implementing safety guardrails, the government's underlying aim will be to help ensure that the development and deployment of AI systems in Australia in legitimate but high-risk settings is safe and can be relied upon, while ensuring the use of AI in low-risk settings can continue to flourish largely unimpeded," the statement says.
The government released a discussion paper in July 2023 to seek public feedback on the regulatory measures needed to govern the safe and responsible development and use of AI. More broadly, it wants to enhance public trust and confidence in AI while acknowledging that the technology has potentially harmful uses whose effects are difficult or impossible to reverse.
Mandatory Guardrails for High-Risk AI Use?
Federal Minister for Industry and Science Ed Husic said the feedback showed Australians want stronger protections in place to help manage the risks, and the government is considering mandatory guardrails for high-risk AI use through changes to existing laws or by enacting new AI-specific laws.
The government defines high-risk uses of AI as those that may have systemic, irreversible or perpetual impact on citizens or businesses, including deepfake imagery and videos, self-driving cars, AI-enabled robots for medical surgery, disinformation content, and the use of AI systems where algorithmic bias may give rise to discrimination against individuals based on their age, gender or affiliation.
Husic said mandatory guardrails to promote the safe design, development and deployment of AI systems will include mandatory safety testing of such systems, watermarking AI-generated content, labeling AI systems in use, and putting in place certifications for developers and deployers of AI systems.
The government in its interim response said it will devote more attention to "frontier" AI models that exceed the capacity of previous models and can generate new content quickly and easily. "This speed and scale is also driving concern that these systems, deployed for legitimate purposes that nonetheless can result in harm, should be subject to greater testing, transparency and oversight," it said.
The government found that at least 10 existing legislative frameworks require amendments to address rapid advancements in generative AI and govern risks such as AI bias, a lack of transparency about how and when AI systems are used, poor data quality, inaccuracies in model inputs and outputs, and model slippage over time.
Following the EU's Risk-Based Approach
The government's proposed AI governance regime mirrors the European Union's risk-based approach to artificial intelligence regulation. European Parliament negotiators and the council presidency in December reached a landmark agreement on how to regulate the use of AI systems that pose risks to society.
The European Council plans to regulate AI to prevent harm to society. High-impact, general-purpose AI models will be subjected to strict controls and, in some cases, banned from the continent. AI systems presenting only limited risk to society will be subject to light transparency obligations, and those that pose no risk will be exempt from regulation.
The European Council will ban general-purpose AI models used for cognitive behavioral manipulation, facial image scraping from websites, emotion recognition, and biometric categorization of individuals based on sexual orientation or religious beliefs. Developers and users of other high-risk AI models will need to honor transparency obligations, demonstrate that their systems comply with the requirements, and clarify the allocation of responsibilities and roles of the various actors in their value chains.
Organizations that use AI systems for banned applications will face fines of up to 35 million euros or 7% of their global turnover, whichever is higher. The EU's Artificial Intelligence Act also sets fines of 15 million euros or 3% of global turnover for violations of its other obligations, and 7.5 million euros or 1.5% of global turnover for supplying incorrect information.
Government Urged to Act Quickly to Prevent Misuse
Toby Walsh, professor of AI at the University of New South Wales, said Australia took longer than needed to wake up to the risks associated with the use of AI. "The European Union has led the way in the regulation of AI. It started drafting regulation back in 2020. And we are still a year or so away before the EU AI Act comes into force. This emphasizes how far behind Australia is," he said.
"Australia's unacceptable delay in developing AI regulation represents both a missed chance for its domestic market and a lapse in establishing a reputation as an AI-friendly economy with a robust legal, institutional and technological infrastructure globally," said Nataliya Ilyushina, research fellow at Melbourne-based RMIT University's blockchain innovation hub.
The government took more than six months to publish its first response to the public consultation and must now act quickly, she said, because risks related to cybersecurity, misinformation, fairness and bias require more stringent regulation.
"Not having enough regulation can lead to market failure where cybercrime and other risks that stifle business growth, lead to high costs and even harm individuals," she added.
The Australian Academy of Science called the government's interim response "a sensible first step" that avoids unnecessary burdens on the research and development sector and balances innovation and competition against the need to safeguard privacy, security and online safety.
The AAS said the government's next steps should include the development of a national strategy for the uptake of AI in the science sector and the implementation of "open science" to ensure everyone can access research on AI system training. Many scientific papers and peer-reviewed publications today are behind paywalls.
The Australian Academy of Technological Sciences and Engineering also welcomed the interim response but said the government must now move quickly to reduce the impact of high-risk AI implementations.
"There is an urgent need to move forward now, particularly in areas like enhanced misinformation laws, the establishment of an expert advisory group and mandatory guardrails for high-risk AI uses," said ATSE President Katherine Woodthorpe.