NAVIGATING AI’S MORAL COMPASS

Much like the internet in the early 2000s, AI has quietly moved from the margins to the mainstream, transforming how businesses operate. The issue now, as Sanjog Misra at the University of Chicago Booth School of Business points out, is how management education can best prepare future leaders to use this technology responsibly.

As someone who researches and teaches coursework on applied AI, I see the excitement and the unease it generates up close. The technology promises incredible gains in productivity, personalisation and problem-solving. However, it also raises tough questions about bias, privacy, accountability and fairness – and understanding these ethical dimensions is just as important as mastering the technology itself.

The three pillars of AI education

In our AI curriculum we focus on three pillars: technology, tools and thinking. The first two cover the mechanics – how AI works and what it enables. The third, thinking, is about mindset: stepping back to ask the hard questions about impact, fairness and responsibility. This kind of critical thinking isn’t an optional add-on; it’s a core part of learning how to lead in an AI-driven world.

In class, we encourage students to interrogate the broader implications of AI in practice. For example, we explored how California’s under-utilised food stamp programme (SNAP) used AI to send personalised nudges encouraging enrolment renewal, increasing uptake of the programme by lifting the recertification rate by more than 20 per cent. Our study went beyond the numbers: students discussed the privacy concerns and ethical trade-offs involved in targeting SNAP recipients through messaging and looked at whether this application of technology risks crossing the line from social support into surveillance. These conversations push students to think critically about outcomes and impacts on personal wellbeing – not just quantitative output.

These aren’t simply theoretical questions – they’re central to how AI will shape the world. We want students to graduate not only knowing what AI can do, but also capable of questioning what AI should do.

Faculty must adapt first

Before we talk about what students need, it’s worth asking how AI is changing the educators. Tools such as ChatGPT and other large language models have completely shifted how students approach assignments. Summarising articles, generating code, even writing first drafts – these tasks are now only one prompt away.

Rather than ban such tools, I encourage students to use them. But crucially, I ask them to critique the outputs. Did the algorithm capture the nuances of the task and the real-world context? What did it miss? Where might it reflect hidden biases? In one class, I asked students to use AI to summarise a business case study, then compare its version with their own summary. The point wasn’t to see who wrote it better, but to discover what the AI had missed or misinterpreted and to unpack why that mattered.

This shift means that our role as faculty is evolving. It’s less about delivering information and setting tasks focused on summarisation, and more about creating space for reflection, dialogue and ethical exploration. We’re not just teaching technical fluency; we’re helping students build the judgment needed to navigate complexity in real time.

At Chicago Booth, our curriculum has already evolved to meet this challenge with a new pathway in Applied AI within the MBA programme. This isn’t just about learning to use algorithms – it’s about building a rigorous framework for understanding the technology’s impact on strategy, operations and society. The specialisation blends technical and analytical training with opportunities to examine the ethical and organisational implications of AI in real-world settings. It reflects a broader shift across business schools: preparing future leaders not just to use AI, but to do so with discernment.

Chicago Booth has built an ecosystem that embeds AI into every layer of the student experience. In addition to the formal pathway, the Centre for Applied Artificial Intelligence plays a critical role – hosting speaker series with industry leaders, supporting hands-on research and helping students apply AI in real-world business contexts. Importantly, AI is not siloed at our school; students encounter its implications across disciplines, from marketing strategy to asset management.

This integrated approach reflects Booth’s belief that AI is not just a topic – it’s a lens for innovation, one that demands both technical fluency and a clear-eyed view of its social and ethical dimensions across all sectors and areas of study. In today’s world, fluency in AI isn’t optional, and values-based decision-making has never mattered more.