A world run by robots is often a top concern in discussions of artificial intelligence, but the government is not worried about a takeover so much as a disinformation deluge.

As the technology rapidly develops, the government is considering laws to ensure stronger AI protections, particularly in high-risk industries.

Generative AI, which creates text, images or other media by drawing from immense amounts of data, often taken from creators without consent, is in the spotlight because of the threat it poses to copyright and creative jobs.

While Industry Minister Ed Husic says he firmly believes in the value of AI and does not want to stifle innovation, the emerging technology presents a massive challenge the government must confront.

“The biggest thing that concerns me around generative AI is just the huge explosion of synthetic data; the way generative AI can just create stuff you think is real and organically developed but it’s come out of a generative model,” he told reporters in Canberra on Wednesday.

“The big thing I’m concerned about is not that the robots take over but the disinformation does.”

Mr Husic warned AI-generated media could be picked up and quickly spread through social media, potentially evolving into something that triggers a government response.

“We all recognise the threats, the perils that present themselves if a government response is based on something that’s not legitimate,” he said.

The government’s interim response to industry consultation on the responsible use of AI, released on Wednesday, suggests introducing measures such as voluntary labelling and watermarking of material.

Safeguards are being considered in relation to high-risk critical industries such as water and electricity, health and law enforcement and could include regulating how products are tested before and after use, along with further transparency on design and data.

But talks are initially focused on developing a voluntary safety standard.

Mr Husic said it was critical to establish protections, noting the “days of self-regulation are gone”.

The interim response paper said while many uses of AI did not present risks requiring oversight, there were still significant concerns.

“Existing laws do not adequately prevent AI-facilitated harms before they occur and more work is needed to ensure there is an adequate response to harms after they occur,” the report said.

More than 500 groups responded to the discussion paper.

The response was welcomed by the Australian Information Industry Association, but the association wants the government to work with international frameworks to ensure the nation isn’t left behind.

A report from the association said 34 per cent of Australians were willing to trust AI, but 71 per cent believed guardrails were needed.

Chief executive Simon Bush said the government needed to take advantage of the growth of AI.

“The regulation of AI will be seen as a success by industry if it builds not only societal trust in the adoption and use of AI by its citizens and businesses but also that it fosters investment and growth in the Australian AI sector,” he said.

Opposition communications spokesman David Coleman warned the government’s plan did not go far enough and risked leaving the nation behind while letting down content creators.

“There is a grave risk Australia will be left standing still when it comes to the effective management of what is the next great industrial revolution,” he said.

“(The government) has taken no action to protect Australian content from being plundered by AI without fair compensation or agreement.”

Kat Wong and Andrew Brown
(Australian Associated Press)