Many Australians don’t trust artificial intelligence (AI), and the nation risks being overtaken by change, experts warn.

Stela Solar, newly installed head of the CSIRO’s National AI Centre, said on Monday Australians have a “higher benchmark of trust” that needs to be met.

“It comes with our natural leaning towards being a little bit more sceptical,” she told a conference.

Regulatory systems and business operations, she said, need to catch up with the new technology.

She said the technology was already covered by privacy, anti-discrimination and harassment frameworks, and corporate requirements, but AI was taking us into new areas that had not yet been defined and could not yet be governed.

For example, recordings of text messages and conversations with a loved one could be used to create a “bot” that generates future chats.

Ms Solar said a consumer had the right to know how their data was being used in systems, algorithms and decision making, and companies needed to be able to show them.

“The intention is to create a framework for us to start navigating the unknown, while the legal standards and regulatory system catches up.”

Self-confessed “robotics enthusiast” Sue Keay works in the mining industry and is part of a push for Australia to develop its own robotics industry.

“A healthy dose of FOMO (fear of missing out) in Australia around AI would actually be quite useful,” the chair of the Robotics Australia Group said.

“I really have grave concerns that we’re not very well prepared for this brave new world that is coming on fast.”

Ms Keay told the Governance Institute conference that change was “unstoppable” despite Australia being a slow adopter of these technologies.

She called for Australia to make it easier for ethics to be embedded in the design and build phase of new technologies.

For example, a tool developed for a legal firm to identify which cases could be won quickly and earn the most money concluded the firm should only take on cases for men, she said.

“With our judicial system, cases for men do tend to go through more quickly and the payouts are higher.

“That is not the fault of the AI developers, it is not the fault of the company who wants to maximise its profits, but what do they do with this information?”

Fortunately, the company took an ethical approach and decided it didn’t want the tool, preferring to guarantee it would continue to represent women, she said.

But ethical approaches to AI remain voluntary, in Australia and elsewhere.


Marion Rae
(Australian Associated Press)