Abstract
This presentation shares the University of Sussex Library’s ongoing, sometimes messy journey in building an AI literacy programme, one that’s been full of false starts, rewrites, experiments, and gradual breakthroughs. Like many libraries, we jumped in early, decided to lead from the front, and have been steadily reshaping our approach as we learn what actually works for students.
In the absence of any clear guidance or established models, we’ve been building the programme workshop by workshop. We started by mapping UNESCO’s AI Competency Framework to QAA guidance, which helped us translate big ideas about information literacy and AI capability into practical learning outcomes. This groundwork has guided the workshops we’re now refining—sessions that focus on prompting, searching, evaluating AI-generated information, and understanding accuracy, transparency, and responsible use.
Our first iteration included two workshops: Questioning and Prompting, introducing LLMs and core prompting skills, and Chatting and Searching, focused on AI-assisted literature searching. These early sessions revealed what resonated with students, what confused them, and what needed to be scrapped or reworked.
From there, we’ve started developing a three-level model (beginner, intermediate, and advanced) to scaffold skills more intentionally. Students move from prompt basics to more complex, reflective work. We found that including brain-only, research-based, and AI-assisted tasks helped students recognise when AI genuinely adds value and articulate when they need creative approaches or more reliable, controlled resources.
A key testing ground has been the Professional Skills for Law module. Here, students took on the role of trainee solicitors, used LLMs to draft legal advice, and verified AI outputs with traditional research. The mix of role play, critical checking, and ethical discussion boosted students’ confidence in evaluating AI-generated information and deepened their understanding of professional and ethical implications.
This presentation will share what we’ve tried, what we’ve thrown out, and what’s starting to stick. We’ll reflect on how collaborative design with academic staff is helping us shape workshops that move beyond technical know-how toward genuinely critical, reflective engagement with AI as part of broader academic and professional literacies.
References
Cai, L., Msafiri, M. M., & Kangwa, D. (2025). Exploring the impact of integrating AI tools in higher education using the Zone of Proximal Development. Education and Information Technologies, 30(6), 7191–7264.
Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720.
Miao, F., Shiohira, K., & Lao, N. (2024). AI competency framework for students. UNESCO.
Sayeedi, M. F. A., Haque, M. S., Razzaque, Z. I., Robin, R. A., & Nawshin, S. (2025). Rethinking search: A study of university students’ perspectives on using LLMs and traditional search engines in academic problem solving (No. arXiv:2510.17726). arXiv. https://doi.org/10.48550/arXiv.2510.17726
Shi, L., Liu, H., Wong, Y., Mujumdar, U., Zhang, D., Gwizdka, J., & Lease, M. (2025). Argumentative experience: Reducing confirmation bias on controversial issues through LLM-generated multi-persona debates (No. arXiv:2412.04629). arXiv. https://doi.org/10.48550/arXiv.2412.04629
Tibau, M., Siqueira, S. W. M., & Nunes, B. P. (2024). ChatGPT for chatting and searching: Repurposing search behaviour. Library & Information Science Research, 46(4), 101331.