Responsible AI Hoo-Ha

There has recently been a big hoo-ha about ‘Responsible AI’ and regulation of AI.

Governments and their business consultants have been busy churning out policies & papers.

I’ve got mixed feelings about the whole thing:

On one hand, it is certainly possible for unscrupulous players to use AI in an irresponsible manner. On the other, maybe we (Australia) should concentrate on coming up with a coherent AI strategy first, before we worry about regulation & responsible use.

Or at least do the two things in parallel…

Furthermore, I’m much more worried about how some nutcase tyrannical despot might be using AI than about my local council.

Australian AI Direction

It’s not really clear where we are heading, but thinking seems to fall into several camps:

  • “we’ll just use OpenAI, Anthropic, Alphabet, Meta etc…”
    This means we have only superficial control over how AI is applied; we are essentially at the mercy of others.
  • “we’ll veto any use of AI” (refer ACTU)
    Can be considered the “stick your head in the sand” approach. It means others will work better/quicker/cheaper. We are kidding ourselves to think this will save jobs or be successful long term.
  • “we’ll build our own”
    This aligns most closely with my thinking. However, being realistic: a small country like Australia will never be even close to self-sufficient in AI. So I think it is important to have a highly targeted & focussed approach: don’t waste effort on unlikely wins; concentrate where we have a good chance of success, or where we have a degree of control.

This ties into the issue of Sovereign AI, which I’ve covered in this post.
Summary: Sovereign AI goes a lot deeper than just an Aussie LLM. There is more to AI than LLMs.


Take it or leave it

If Australia comes up with AI regulation that does not suit the international AI powerhouses, their likely response will be “take it or leave it”. I can’t imagine Meta changing much to suit a far-flung antipodean island with a paltry ~27M people.

If we are just buying AI from elsewhere & it is essentially a black-box service, then any Responsible AI (RAI) requirements or AI regulation seems futile, since our only levers are whether to use it & how to use it.

It might make sense to partially piggyback on another bloc (like the Europeans) with greater critical mass, rather than coming up with our own.

Red tape nightmare

My concern is that

  • we introduce a whole layer of regulation that burdens legitimate organisations, which already operate under some constraints, even if imperfect ones. These include existing legislation, management & shareholder ethics, personnel/whistleblower protections & so on. These constraints might not be specific to AI, but they are better than nothing.

yet:

  • ‘bad actors’ who don’t give a rat’s about AI regulation are free to apply AI for their own purposes

Bad actors

I don’t mean Gwyneth Paltrow, more the Kim Jong-il type. The risk is how these ‘bad actors’ will use (& undoubtedly already are using) artificial intelligence. This includes:

  • terrorists
  • rogue states
  • criminals & scammers

I acknowledge it is difficult to differentiate good & bad uses of AI, since the same technology could be used for quite innocent or for diabolically nefarious purposes. These purposes could include anything from bioweapons to cyber-attacks to espionage to political manipulation.

Limiting the use of AI for potentially dangerous or even catastrophic purposes is obviously something Australia can’t control by itself - it relies on international cooperation.

Summing up

I’m not arguing we don’t need Responsible AI or regulation - it is certainly important.
However, it can’t be meaningfully applied in isolation, particularly if we have minimal control & influence over the AI we choose to use.

Other priorities include:

  • developing national AI strategy
  • determining what we can & can’t realistically do
  • determining what we can & can’t realistically control (& where regulation is futile)
  • preventing or reducing AI falling into the wrong hands, or at least mitigating the effects