55 minutes and 36 seconds
Webpage Link: https://www.youtube.com/watch?v=JicpcYwQe3w
Audio Link (51.5 MB): https://podqueue.fm/proxy/aGlK3J70FdF-N6LxnWDbow
Description (automatically extracted)
Ben Zhao, Session Closing Keynote, Open Repositories 2025 (OR2025), Chicago, Illinois, USA, 15-18 June 2025
Citation: Zhao, B. (2025, June 16). Dealing with Generative AI, Harms and Mitigation Techniques. Open Repositories 2025 (OR2025), Chicago, Illinois, USA. Zenodo. https://doi.org/10.5281/zenodo.15790708
License: Creative Commons Attribution 4.0 International
Summary: This keynote addresses two key questions: Are large language models (LLMs) the right interface for data and information access, and what harms do AI models pose to institutions such as libraries today? On the first, Zhao explains that current LLMs, while powerful, are pattern-matchers rather than true reasoning systems; newer techniques such as chain-of-thought prompting, self-verification, and retrieval-augmented generation (RAG) offer partial improvements but rest on the same unreliable foundations. On the second, he highlights the growing problem of AI-driven web scraping, noting that most mitigation strategies offer only limited protection. Zhao concludes that today's generative LLMs are fundamentally flawed: composition techniques help only marginally, and meaningful progress will require new architectures built on better understanding and ethically sourced data. In the meantime, AI-driven crawlers pose an immediate threat, and with most conventional defences proving ineffective, commercial, network-level blocking remains one of the few viable mitigation strategies.
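To make the RAG technique named in the summary concrete, here is a minimal illustrative Python sketch. The toy corpus, the word-overlap scoring, and the `call_llm` stub are assumptions for illustration only; they do not reflect the speaker's implementation or any particular vendor's API.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank document snippets
# by word overlap with the query, then paste the best matches into the prompt
# so the model answers from supplied text rather than from memory alone.
from collections import Counter

# Toy corpus standing in for an institutional repository (illustrative only).
CORPUS = [
    "Open Repositories 2025 took place in Chicago from 15 to 18 June 2025.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
    "Aggressive AI crawlers can overwhelm institutional repository servers.",
]

def score(query: str, doc: str) -> int:
    """Count words shared between the query and a document (case-insensitive)."""
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (local model or hosted API)."""
    return f"[model output for a prompt of {len(prompt)} characters]"

def answer(query: str) -> str:
    """Build a grounded prompt from retrieved context and pass it to the model."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("When and where was Open Repositories 2025 held?"))
```

The sketch also shows the limitation the summary attributes to Zhao: retrieval constrains what the prompt contains, but the final answer still comes from the same underlying pattern-matching model.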
- Added on: July 7th, 2025 12:07 PM EDT
- Last modified on: July 7th, 2025 12:07 PM EDT