The Impact of AI and Large Language Models in Legal Education: Reimagining Assessment and Navigating the Elephant in the Room
Steven Montagu-Cairns, University of Leeds

In the wake of the Fourth Industrial Revolution, the landscape of legal education is evolving rapidly with the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) such as OpenAI's GPT series. This presentation aims to shed light on the transformative potential of these tools, with particular emphasis on the alternative methods of assessment they enable and the benefits and risks they introduce.

Steve will discuss novel assessment methods fostered by AI, such as automated case analysis, real-time legal research tasks and simulations that mimic court proceedings or client consultations. These methods not only offer students hands-on, practical experience but also promise more objective, consistent and immediate feedback.

At the same time, the introduction of LLMs into the academic arena is not without concerns. LLMs risk promoting superficial understanding if students rely on them too heavily. There is also the inherent risk of bias in the models and the potential for misuse in generating plagiarised or misleading content.

Finally, Steve will explore the broader implications of LLMs in legal education, such as how they might alter the instructor's role, democratise access to legal information and reshape the very competencies we prioritise in budding legal professionals. Through this discussion, the talk aims to equip listeners with a holistic understanding of the changing dynamics in legal education and to inspire informed decisions in curriculum design and pedagogy.

Removing the Lure of the Forbidden Fruit: A Lecturer's Role in Facilitating Students' Use of Artificial Intelligence in Research in Line with Academic Best Practice
Alicia Bates, University of Law

It is no secret that artificial intelligence (AI) is being used by students when researching and writing their academic essays. However, many students are not using AI in a manner compatible with their university's academic conduct policies. While some students act with wilful disregard for their university's policies, others do not fully understand the boundary between appropriate and inappropriate use of AI.

This paper explores a lecturer's role in supporting students' use of AI in academic research. It argues that lecturers cannot simply shun AI and expect all students to steer clear of it: if students hear only a blanket message of "do not use AI", the risk of academic misconduct may increase. Instead, lecturers can take steps to guide students on how to use AI in line with academic best practice.