more complex applications, moving from quick edits to structured analyses and decision support. With familiarity comes broader impact, extending from day-to-day tasks to larger projects across departments. AI can also help municipalities secure additional financial resources by preparing and reviewing draft grant applications, especially those that busy staff previously lacked the time to complete. AI-powered chatbots can enhance municipal services by providing residents with accurate, timely information without requiring staff involvement, and AI translation tools can help cities and villages better serve diverse populations. AI applications are expanding rapidly, even beyond generative AI. Municipalities are using AI to review and summarize law-enforcement body-cam footage, streamline public-records requests, issue automated emergency alerts, transcribe and analyze meetings, optimize traffic signals, and conduct property assessments using 3D imaging, among many other functions.

Identifying and Mitigating AI Risks

Despite AI’s many benefits, it also introduces important risks, particularly around data security. Information entered into AI platforms may be retained and used for training, potentially exposing sensitive data publicly. Municipal staff must be trained never to enter confidential information into AI prompts. To address this privacy concern, UW-Madison’s Division of Extension uses an enterprise version of Microsoft Copilot that keeps university data private and under university control. While enterprise solutions can be costly, they are often a worthwhile investment to protect sensitive information such as human resources records, health data, and law enforcement investigation details.

Another significant AI risk is overreliance on AI-generated content. In addition to the unintentional use of copyrighted material, particularly when AI is used to create video content, AI can produce inaccurate or fabricated information known as “hallucinations.” For example, a state court judge recently issued a case opinion that included fictitious cases obtained using AI. To mitigate this risk, municipal staff must always verify AI outputs, cite official sources, and understand that they remain responsible for their work.

[Photo: Bill Oemichen teaching AI to Bayfield County employees. Photo credit: Kelly Westlund, Deputy Bayfield County Administrator.]

An additional AI risk is embedded bias in AI responses to queries. These biases may reflect:
1. Training data bias, where the response is skewed by the data used to train the model;
2. Cultural bias, where the response favors Western norms;
3. Gender bias, where stereotypes are reinforced (e.g., male doctors, female teachers);
4. Racial and ethnic bias, where responses underrepresent diverse perspectives;
5. Confirmation bias, which aligns responses with user assumptions;
6. Political bias, which favors certain ideologies;
7. Socioeconomic bias, which reflects more affluent societies;
8. Accessibility bias, which excludes less mainstream sources;
9. Language bias, where AI performs better in English; and
10. Recency bias, which favors more recent information.

To reduce bias, staff should always check outputs, consider diverse viewpoints, and use AI platforms trained on inclusive data sets, such as ChatGPT and Copilot. Used responsibly, AI can significantly enhance municipal operations. During AI training sessions, local government employees often raise the concern that AI might replace their positions. Economist Richard Baldwin argues, “AI won’t take your job, it’s somebody using AI who will take your job.” This resonates with local government employees because they understand the need to keep innovating at a time of tight financial resources.

Municipal AI Use Policy Development

Cities and villages should establish a policy governing employee use of generative AI, with essential elements that ensure employees understand how to use generative AI effectively while minimizing the risk of inappropriate disclosure of confidential information. Cities and villages should also consider requiring employees to complete AI training before they are allowed to use tools like ChatGPT or Copilot. Municipal AI policies should define what constitutes “permissible use of AI” at work. This may include using AI to enhance writing quality when drafting grant applications, meeting minutes, policies, citizen surveys, presentations, press releases, and position papers, as well as researching, analyzing, or summarizing non-sensitive municipal documents and reports. The policy should inform municipal employees that they are fully responsible for their work, including when using generative AI, and that they will be held accountable for any incorrect facts or citations in municipal documents. In short, the policy should make clear that municipal employees cannot use AI as a justification for errors.
The Municipality - October 2025 | 7