D4G Schedule

This is a preliminary schedule for the Deliberation4Good workshop.

8:45-9:15 Doors Open + Welcome Remarks (Georgi Karadzhov, Andreas Vlachos, Tom Stafford, Christine De Kock, Youmna Farag)

9:15-10:00 Keynote 1: Prebunking: building resilience against misinformation at scale, Jon Roozenbeek

Abstract: In recent years, “prebunking” (preemptive debunking) has taken flight as a means to confer resistance against misinformation in a variety of issue domains. In this talk, Jon Roozenbeek (Cambridge University) discusses the research behind prebunking and psychological “inoculation” interventions, focusing especially on games (such as Bad News and Go Viral) and videos. Roozenbeek will also discuss the results of a recently published study in Science Advances, in which he and his colleagues showed that presenting prebunking videos as YouTube ads significantly improved YouTube users’ ability to identify manipulation techniques commonly used in misinformation.

Bio:

Jon Roozenbeek is a British Academy Postdoctoral Fellow at the Department of Psychology at Cambridge University. His research focuses on misinformation, vaccine hesitancy, online extremism, and information warfare. As part of this work, he co-developed the award-winning fake news games Bad News, Harmony Square and Go Viral. His doctoral dissertation (2020) examined media discourse in the “People’s Republics” of Donetsk and Luhansk in eastern Ukraine. He is currently writing two books with Cambridge University Press: the first about propaganda and information warfare during the Russo-Ukrainian war, and the second about the psychology of misinformation.

10:00-10:45 Project Talks 1
  • 10:15-10:30 Opening Up Minds, Paul Piwek
  • 10:30-10:45 Collaborative Human Detection of Deepfake Texts, Dongwon Lee

10:45-11:15 Coffee Break

11:15-12:00 Keynote 2: Personalised Longitudinal Natural Language Processing, Maria Liakata

Abstract: In most of the tasks and models that we have made great progress with in recent years, such as text classification and natural language inference, there isn’t a notion of time. However, many of these tasks are sensitive to changes and temporality in real-world data, especially when pertaining to individuals, their behaviour and their evolution over time. I will present our programme of work on personalised longitudinal natural language processing. This consists of developing natural language processing methods to: (1) represent individuals over time from their language and other heterogeneous and multi-modal content; (2) capture changes in individuals’ behaviour over time; (3) generate and evaluate synthetic data from individuals’ content over time; and (4) summarise the progress of an individual over time, incorporating information about changes. I will discuss progress and challenges thus far, as well as the implications of this programme of work for downstream tasks such as mental health monitoring.

Short bio: 

Maria Liakata is Professor in Natural Language Processing (NLP) at the School of Electronic Engineering and Computer Science, Queen Mary University of London and Honorary Professor at the Department of Computer Science, University of Warwick. She holds a UKRI/EPSRC Turing AI fellowship (2020-2025) on “Creating time sensitive sensors from user-generated language and heterogeneous content”. The research in this fellowship involves developing new methods for NLP and multi-modal data to enable longitudinal, personalised language monitoring. She is also the PI of projects on language sensing for dementia monitoring & diagnosis, opinion summarisation and rumour verification from social media. At the Alan Turing Institute she founded and co-leads the NLP and data science for mental health special interest groups. She has published over 150 papers on topics including sentiment analysis, semantics, summarisation, rumour verification, resources and evaluation, and biomedical NLP. She is an action editor for ACL Rolling Review and regularly holds senior roles in conference and workshop organisation.

12:00-13:00 Lunch + Networking

13:00-13:45 Keynote 3: Scaling up interactive argumentation, Sacha Altay

Abstract: In many domains there is a gap between public opinion and the scientific consensus. People overestimate the risks of vaccines, nuclear energy or genetically modified organisms. Providing people with good arguments and exposing them to the consensus can help, but mass persuasion is notoriously hard and has only minimal effects. When people discuss issues with peers or experts they trust, they appear more open to changing their minds. Yet the interactive nature of discussion makes it difficult to scale up. We tested the effectiveness of two chatbots emulating important features of discussion, such as its interactivity. After presenting the results of two experiments, I will suggest that we may need to rethink the way we frame the fight against false beliefs. In particular, I will argue that focusing on improving the acceptance of reliable information and making it more attractive may be more effective at reducing false beliefs than focusing on misinformation.

Bio: Sacha is a postdoctoral research fellow working on misinformation and (mis)trust in the news at the University of Oxford (Reuters Institute for the Study of Journalism). He holds a PhD in experimental psychology from the École Normale Supérieure in Paris. During his PhD he tested novel communication techniques to inform people efficiently and correct common misperceptions about vaccines, GMOs, or nuclear energy.

13:45-14:30 Keynote 4: Automated Generation of Pedagogical Interventions in Dialogue-based Intelligent Tutoring Systems, Ekaterina Kochmar

Abstract: Despite artificial intelligence (AI) having transformed major aspects of our society, only a fraction of its potential has been explored, let alone deployed, for education. AI-powered learning can provide millions of learners with personalised, active and practical learning experiences, which are key to successful learning. This is especially relevant in the context of online learning platforms. In this talk, I will present the Korbit Intelligent Tutoring System (ITS), which provides a dialogue-based learning experience online. Specifically, Korbit relies on the Socratic tutoring method: presentation of learning material is followed by a series of questions and pedagogical interventions from the AI tutor aimed at promoting deliberation, guiding the learner and setting the line of reasoning. I will describe the AI approaches that we use to provide learners with highly personalised and active learning experiences via dialogue. Finally, I will present and discuss the results of a comparative head-to-head study on learning outcomes for Korbit and a popular MOOC platform that follows a traditional model, delivering content using lecture videos and multiple-choice quizzes. Our experiments demonstrate a significant increase in learning outcomes, with students on the Korbit platform showing higher course completion rates and achieving learning gains 2 to 2.5 times higher than students on an online platform who do not receive dialogue-based feedback. These results highlight the tremendous impact that can be achieved with a dialogue-based AI-powered system, making high-quality learning experiences available to millions of learners around the world.

Bio: Ekaterina Kochmar is a Lecturer at the Department of Computer Science of the University of Bath, where she is part of the AI research group. She conducts research at the intersection of artificial intelligence, natural language processing and intelligent tutoring systems. She is also the President of the Special Interest Group on Building Educational Applications (SIGEDU) of the Association for Computational Linguistics.

Prior to that, she worked as a postdoctoral researcher at the ALTA (Automated Language Teaching and Assessment) Institute, University of Cambridge, focusing on the development of educational applications for second language learners. Her research contributed to the building of Read & Improve, a readability tool for non-native readers of English.

She is also a co-founder and the chief scientific officer of Korbit AI, focusing on building an AI-powered dialogue-based tutoring system capable of providing learners with high-quality, interactive and personalised education in STEM subjects.

14:30-15:00 Project Talks 2
  • DEliberation Enhancing Bots, Georgi Karadzhov
  • Next generation solutions to disinformation: group dynamics as the new frontier, Ruurd Oosterwoud

15:00-15:30 Coffee Break

15:30-16:15 Keynote 5: Interactional dynamics in deliberative dialogues, Shauna Concannon

Abstract: To consider how technology might support humans to deliberate more effectively, it is necessary to understand the interactional dynamics that underpin deliberative dialogues. Negotiating oppositional perspectives in conversation requires subtle linguistic and pragmatic skills. In this talk, I will begin by presenting previous work on deliberation in human-human dialogues, including experiments that test how different pragmatic strategies relating to politeness and speaker certainty affect the resulting deliberation. I will then discuss more recent work, which applies an interactional linguistics perspective to the study of human-chatbot interactions, and explore the potential implications for the use of chatbots in deliberative contexts.

Bio:

Shauna is an interdisciplinary researcher, combining approaches from linguistics, psychology and human-computer interaction to study the communication of information in online discussions and human-chatbot interactions. They are an assistant professor in Computer Science and Digital Humanities at Durham University. Prior to this they worked on the Giving Voice to Digital Technologies project at the University of Cambridge and completed a PhD on disagreement in dialogue at Queen Mary, University of London.  

16:15-17:00 Poster and Demo Session

19:00 Dinner and drinks: Maypole pub (location)