Stirling embraces AI – with limits

ARTIFICIAL INTELLIGENCE is being embraced at Stirling council, with councillors and staff allowed to “leverage AI for tedious tasks” so they can “focus on impactful customer-centric activities”.

This week the council passed a “Generative AI policy”, referring to the programs that can generate humanesque responses to questions.

With approval from the CEO, councillors will be allowed to use publicly available AI platforms “for completion of their official duties”, and staff will be able to use approved, licensed AI for some duties as long as it’s double-checked by humans for accuracy. 

Stirling recently rolled out Bing Chat Enterprise, a commercial AI chat program described in a staff report as “a secure AI powered platform designed to revolutionise the way the City approaches work related tasks”. 

And back in July Stirling started using a chatbot called “Ainsley” to conduct community consultation and offer information about upcoming community infrastructure plans (“Posts start talking back,” Voice, July 29, 2023).

• What Bing’s Image Creator thinks the civic centre will look like when AI is embraced by the City of Stirling (or the “City of Sliigling” – this is why you have to double-check the AI’s work).

The new policy carries some caveats for using such programs:

• Users must appropriately disclose the use of generative AI in generating information, assisting with decision making or producing communications;

• Results of all generative AI tools must be verified by a person before use or communication;

• Results of all generative AI tools must be explainable and transparent in use; and,

• They’re only allowed to use programs cleared by their digital security techies, and the policy says free publicly available programs “must not be used where services will be delivered, or decisions will be made”.

The current generation of chatbots became widely available in November 2022, after the AI programs were trained on large bodies of text and then refined by humans to give plausible answers.

Stirling’s policy insists AI’s work be double-checked by a human after various AI glitches around the globe: 

• Several lawyers who used AI to research legal arguments were exposed after the bots cited cases that never existed, or invented fictional quotes from judges. 

• Earlier this month four Australian academics apologised to consultancy company KPMG for their submission to a parliamentary inquiry that made several false accusations against the company, including that KPMG was involved in something called the “7-Eleven wage theft scandal”. The scandal never happened; it was invented by Google’s Bard AI tool.

In a separate item presented to council this week, councillors also approved adding AI issues as a new category of “strategic risk” to be taken into account when making decisions.

Strategic risks are those affecting the City’s overall strategies and long-term vision.

The staff report says: “Whilst AI promises great benefits, it also raises many concerns around privacy, security and safety… in the evolving landscape of AI, it is imperative that the City also considers the risks associated with AI.”

A report on AI risks will go to the next audit committee meeting.

by DAVID BELL
