ERIK STROEMBERG, from page 9

on risks, opportunities, and practical needs. Through broad engagement and the insights we gathered, we shaped a policy that anticipated how teams might work while leaving room to refine and expand as new tools emerged. We also ensured that the scope of the policy applied not only to chatbot interactions but also to AI image generation and AI meeting note-takers, covering both internal and external communications.

Finally, before diving into specific tools or technologies, we made sure our AI team understood the current capabilities of AI and its rapid trajectory. We then organized collaborative workshops and pilot programs, carefully selecting participants who were quick learners and natural cheerleaders for the initiative. These sessions combined workflow mapping with the development and testing of well-thought-through prompts, helping us outline a solid strategy. We collected valuable data throughout these pilots, measuring time taken, effort level, and quality against baseline methodologies. Through this hands-on exploration, we identified where AI could boost speed and innovation and where we needed to keep human expertise firmly at the core of decision-making and quality control. These exercises gave us context and insight to inform future revisions of the policy.

IMPLEMENTATION. We quickly discovered that an instructional PDF doesn’t drive change. Real implementation came only when we committed to onboarding, ongoing education, demonstrating approved use cases, and creating space for discussion. Integrating the policy into everyday practice made a meaningful difference. It wasn’t about getting signatures on policy documents; it was about building a shared understanding of the reasoning behind each provision.

When it came to AI-generated content, we communicated a clear analogy: treat outputs like junior staff work. Even when something looked polished, it could contain hidden inaccuracies, hallucinations, bias, or missing context. To protect quality and maintain trust, we implemented independent review processes for AI-assisted work, regardless of the user’s experience level.

One important decision during the development of our AI policy was aligning the consequences of misuse with our existing IT and HR policies. The policy also complements those documents; for example, it states that “AI should not be the primary decision-maker for any employment-related judgments.” By keeping the disciplinary structure consistent, we eliminated ambiguity and reinforced that AI use is held to the same professional standards as other core responsibilities.

We also learned that hard-coding specific tools into the policy didn’t work. The pace of AI software evolution made that approach impossible to maintain. We replaced it with a dynamic list of approved tools that lives outside the main document and is revised regularly. This gave us flexibility while maintaining oversight and consistency.

Data privacy was a foundational pillar. We clearly defined what information could never be shared with AI systems, depending on whether the system is a public LLM or a secure internal environment, and we explained the difference between the two. We cautioned teams that while public AI tools are easy to access, they often harvest your inputs to improve their models, so using unauthorized chatbots can expose sensitive company data. Prompts themselves were treated as intellectual property, protected just like code, specifications, and client deliverables.

Underlying all of this is one core principle: human judgment remains essential. Our policy makes it clear that AI is a support tool, not a replacement for expertise. Every AI-assisted output must be reviewed for accuracy, bias, and alignment with our values, ensuring the work we deliver remains thoughtful, ethical, and high quality.

FOLLOW THROUGH. A rigid policy can stifle creativity, while a vague one invites risk. We invest time in finding language that sets clear expectations and still encourages thoughtful exploration. Teams are invited to evaluate new workflows within safe boundaries, so innovation flourishes without compromising data security, oversight, or quality.

We hope these insights help you as you develop your own policy. Ours remains a work in progress: we are identifying gaps, rolling out additional training to ensure alignment across offices, and preparing to revise our guidelines as both market tools and our in-house solutions evolve.

If you are debating whether to start, do not wait. You do not need perfection on day one. Give your team a structure for responsible AI use, gather feedback, and iterate. As the technology matures, your plain-language instructions will evolve into automated workflows that drive geometry, populate schedules, and produce client-ready deliverables. Decide early where AI adds speed and insight, and where human intuition remains non-negotiable. Teach emerging staff the reasoning behind these choices so they build the judgment that senior reviewers rely on during QA/QC. The richer the context you supply, the more accurate, personal, and valuable the AI’s response becomes, keeping you firmly in control while extending your reach through every project phase.

Erik Stroemberg is an associate director and BIM manager at HLB Lighting. Connect with him on LinkedIn.
© Copyright 2025. Zweig Group. All rights reserved.
THE ZWEIG LETTER | OCTOBER 6, 2025, ISSUE 1604