Invited Talks

last modified: 08 Nov 2021

The Logic of Graph Neural Networks

Martin Grohe, RWTH Aachen University

Graph neural networks (GNNs) are a deep learning architecture for graph-structured data that has developed into a method of choice for many graph learning problems in recent years. It is therefore important that we understand their power. One aspect of this is their expressiveness: which functions on graphs can be expressed by a GNN model? Surprisingly, this question has a precise answer in terms of logic and a combinatorial algorithm known as the Weisfeiler-Leman algorithm. My talk will be a survey of recent results linking the expressiveness of GNNs to logical expressivity.
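
For readers who have not seen it, here is a minimal sketch of the 1-dimensional Weisfeiler-Leman test (colour refinement) in Python; the graphs and number of rounds are illustrative choices, not material from the talk. Each round recolours a node by combining its own colour with the multiset of its neighbours' colours, which is exactly the aggregation pattern of message-passing GNN layers.

    # One round: a node's new colour = (old colour, sorted neighbour colours).
    def weisfeiler_leman(adj, num_rounds):
        """adj: dict mapping each node to a list of its neighbours."""
        colours = {v: 0 for v in adj}  # uniform initial colouring
        for _ in range(num_rounds):
            signatures = {
                v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                for v in adj
            }
            # Relabel signatures with small integers to keep colours compact.
            palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
            colours = {v: palette[signatures[v]] for v in adj}
        return colours

    # Different colour histograms certify non-isomorphism; a message-passing
    # GNN can distinguish at most what this test distinguishes.
    triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    path = {0: [1], 1: [0, 2], 2: [1]}
    print(sorted(weisfeiler_leman(triangle, 2).values()))  # [0, 0, 0]
    print(sorted(weisfeiler_leman(path, 2).values()))      # [0, 0, 1]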

Bio: Martin Grohe is a Professor of Theoretical Computer Science at RWTH Aachen University. He received his Ph.D. in Mathematics at Freiburg University in 1994 and then spent a year as a visiting scholar at Stanford and the University of California at Santa Cruz. Before joining the Department of Computer Science of RWTH Aachen in 2012, he held positions at the University of Illinois at Chicago, the University of Edinburgh, and the Humboldt University of Berlin. His research interests are in theoretical computer science interpreted broadly, including logic, algorithms and complexity, graph theory, theoretical aspects of machine learning, and database theory.


Spatial and Physical Reasoning: From Angry Birds to Open World AI

Jochen Renz, Australian National University

Research on "naïve physics", qualitative reasoning, and spatial and temporal reasoning has been an important part of KR for over 40 years. The distinctive feature of this type of reasoning is that the domains over which we are reasoning are typically infinite and continuous. This feature is of high practical importance, as it is omnipresent when interacting with the real physical world. Future AI agents need these reasoning capabilities to perform their tasks in the physical world safely and reliably, just like humans do. These agents will mostly have only partial information about their environment, for example, through noisy perception. Their action space will be continuous, and the exact outcome of their physical actions might be unknown in advance, which makes it very difficult to determine action sequences that solve a given physical task. In addition, the presence of entities in the environment with unknown properties will make these physical tasks even harder. To promote research in this important area, and to enable the required agent capabilities, we have established the Angry Birds AI Competition, which covers the major challenges of interacting with the physical world in a simplified and controlled environment. After nine annual competitions, KR-based approaches vastly outperform learning-based approaches.
In this talk, I will present and motivate this area, summarise different approaches and future challenges, and discuss how KR research can be integrated with other subfields of AI to make a real impact.

Bio: Jochen Renz is a Professor of Artificial Intelligence at the Australian National University. He received his PhD from the University of Freiburg in 2000, followed by a postdoc at Linköping University and a Marie Curie Fellowship at TU Vienna. In 2003 he moved to National ICT Australia in Sydney and in 2006 to the Australian National University in Canberra, where he was promoted to full professor in 2013. Jochen published his first KR paper in 1998 on theoretical foundations of spatial reasoning. In 2012, he organised the first Angry Birds AI Competition, which is now held annually as part of IJCAI. Jochen’s research focuses on theoretical and practical aspects of spatial and physical reasoning and on integrating them with other AI areas to solve challenging problems.


Great Moments in KR Talk:

Description Logic and OWL: A Tale of Discoveries, Design Choices, Challenges, and Lessons Learnt

Uli Sattler, University of Manchester

Description Logic and ontology languages, in particular OWL, have now been around for a long time, with an active research community developing a plethora of KR formalisms, reasoning tasks, algorithms, reasoners, and computational complexity results. The relationships with other formalisms are well understood and a rich infrastructure has been developed. In this talk, I will discuss some of the advances, discoveries, and design choices made on this journey, as well as some of the challenges faced and insights gained, with a focus on the use of Description Logic as the underpinning of OWL and the ensuing demands. The talk should be accessible to the broad KR community and give the audience a better understanding of DLs, OWL, and current developments.

Bio: Uli Sattler is a professor at the University of Manchester, working in logic-based knowledge representation, Description Logics, and ontology engineering. Together with colleagues, she designed the family of Description Logics underpinning ontology languages such as OWL, as well as decision procedures for the relevant reasoning problems of these logics. She has been working on a range of novel reasoning problems that are important for ontology engineering and usage, such as module extraction and decomposition, entailment explanation, and mining axioms from data. Uli completed her PhD under the supervision of Franz Baader at RWTH Aachen and her habilitation at TU Dresden.


Reverse Engineering Human Cognitive Development: What Do We Start With, and How Do We Learn The Rest?

Joshua Tenenbaum, MIT

What would it take to build a machine that grows into intelligence the way a person does, starting like a baby and learning like a child? AI researchers have long debated the relative value of building systems with strongly pre-specified knowledge representations versus learning representations from scratch, driven by data. However, in cognitive science, it is now widely accepted that the analogous “nature versus nurture” question is a false choice: explaining the origins of human intelligence will most likely require both powerful learning mechanisms and a powerful foundation of built-in representational structure and inductive biases. I will talk about our efforts to build models of the starting state of the infant mind, as well as the learning algorithms that grow knowledge through early childhood and beyond. These models are expressed as probabilistic programs, defined on top of simulation engines that capture the basic dynamics of objects and agents interacting in space and time. Learning algorithms draw on techniques from program synthesis and probabilistic program induction. I will show how these models are beginning to capture core aspects of human cognition and cognitive development, in terms that can be useful for building more human-like AI. I will also talk about some of the major outstanding challenges facing these and other models of human learning.
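
To make the modelling style concrete, here is a toy probabilistic program in Python, purely illustrative and not one of the models from the talk: a miniature simulation engine for a sliding block is wrapped with a prior over an unknown friction coefficient, and inference by likelihood weighting inverts the simulation to recover the friction from an observed stopping position.

    import math
    import random

    G, DT = 9.8, 0.01  # gravity (m/s^2) and Euler time step (s)

    def simulate(v0, mu):
        """Tiny simulation engine: integrate a sliding block until it stops."""
        x, v = 0.0, v0
        while v > 0:
            v -= mu * G * DT  # friction decelerates the block
            x += v * DT
        return x

    def likelihood(obs, pred, sigma=0.05):
        """Gaussian observation noise on the perceived stopping position."""
        return math.exp(-((obs - pred) ** 2) / (2 * sigma ** 2))

    observed_stop = 1.0  # metres: the noisy percept we condition on
    samples = []
    for _ in range(5000):
        mu = random.uniform(0.1, 0.9)  # prior over the friction coefficient
        x = simulate(v0=2.0, mu=mu)    # run the generative model forward
        samples.append((mu, likelihood(observed_stop, x)))

    total = sum(w for _, w in samples)
    print(f"posterior mean friction: {sum(mu * w for mu, w in samples) / total:.2f}")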

Bio: Josh Tenenbaum is Professor of Computational Cognitive Science at MIT in the Department of Brain and Cognitive Sciences, the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Center for Brains, Minds and Machines (CBMM). He received his PhD from MIT in 1999, and taught at Stanford from 1999 to 2002. His long-term goal is to reverse-engineer intelligence in the human mind and brain, and to use these insights to engineer more human-like machine intelligence. His current research focuses on the development of common sense in children and machines, the neural basis of common sense, and models of learning as Bayesian program synthesis. His work has been published in Science, Nature, PNAS, and many other leading journals, and has been recognized with awards at conferences in Cognitive Science, Computer Vision, Neural Information Processing Systems, Reinforcement Learning and Decision Making, and Robotics. He is the recipient of the Distinguished Scientific Award for Early Career Contributions in Psychology from the American Psychological Association (2008), the Troland Research Award from the National Academy of Sciences (2011), the Howard Crosby Warren Medal from the Society of Experimental Psychologists (2016), the R&D Magazine Innovator of the Year award (2018), and a MacArthur Fellowship (2019). He is a fellow of the Cognitive Science Society and the Society of Experimental Psychologists, and a member of the American Academy of Arts and Sciences.


The Interactionist View of Reasoning for Explainable AI

Francesca Toni, Imperial College London

The interactionist view of human reasoning states that "the normal conditions for the use of reasons are social and more specifically dialogic" and that "people use reasons as arguments in favor of new decisions or new beliefs" [1]. In this talk I will explore how this view can be naturally transferred onto machine reasoning to support the vision of explainable AI (XAI) as computational argumentation. I will briefly overview the field of XAI, which, while having been investigated for decades, has witnessed unprecedented growth in recent years, alongside AI itself. I will then show that computational argumentation can be used to provide a variety of explanations, including dialogic ones, for the outputs of a variety of AI methods, leveraging computational argumentation's wide array of reasoning abstractions. In particular, following [2], I will overview the literature, focusing on different types of explanation (intrinsic and post-hoc, which in turn may be approximate or complete), different forms of AI with which argumentation-based explanations are deployed, different forms of delivery of explanations (including, prominently, dialogic ones), and the different argumentation frameworks used from among the several proposed in computational argumentation. I will conclude by laying out a roadmap for future work within this exciting area.

[1] Mercier and Sperber: The Enigma of Reason: A New Theory of Human Understanding. Harvard University Press, 2017.
[2] Čyras, Rago, Albini, Baroni, Toni: Argumentative XAI: A Survey. IJCAI 2021.
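
As a concrete glimpse of one of these reasoning abstractions, here is a minimal, purely illustrative Python sketch (not from the talk) of a Dung-style abstract argumentation framework and its grounded extension: the set of arguments that can be defended without controversy, computed as a least fixpoint.

    def grounded_extension(arguments, attacks):
        """attacks: set of (attacker, target) pairs."""
        def defended(s):
            # An argument is acceptable w.r.t. s if each of its attackers
            # is itself attacked by some member of s.
            return {
                a for a in arguments
                if all(any((d, attacker) in attacks for d in s)
                       for (attacker, target) in attacks if target == a)
            }
        extension = set()
        while True:
            nxt = defended(extension)
            if nxt == extension:
                return extension
            extension = nxt

    args = {"a", "b", "c"}
    atts = {("b", "a"), ("c", "b")}  # c attacks b, b attacks a
    print(sorted(grounded_extension(args, atts)))  # ['a', 'c']: c defends a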

Bio: Francesca Toni is Professor in Computational Logic and Royal Academy of Engineering/JP Morgan Research Chair on Argumentation-based Interactive Explainable AI at the Department of Computing, Imperial College London, United Kingdom, and the founder and leader of the CLArg (Computational Logic and Argumentation) research group. Her research interests lie within the broad area of Knowledge Representation and Reasoning in AI and Explainable AI, and in particular include Argumentation, Argument Mining, Logic-Based Multi-Agent Systems, Non-monotonic/Default/Defeasible Reasoning, and Machine Learning. She graduated in Computing at the University of Pisa, Italy, and received her PhD in Computing from Imperial College London. She has coordinated two EU projects, received funding from the EPSRC in the United Kingdom and from the EU, and was previously awarded a Senior Research Fellowship from the Royal Academy of Engineering and the Leverhulme Trust. She is currently Technical Director of the EPSRC-funded ROAD2H project (road2h.org) and co-Director of the UKRI Centre for Doctoral Training in AI for Healthcare (ai4health.io), and was founding co-director of the UKRI Centre for Doctoral Training in Safe and Trusted AI (safeandtrustedai.org). She has also recently been awarded an ERC Advanced Grant on Argumentation-based Deep Interactive eXplanations (ADIX). She has published over 200 papers, is a EurAI fellow, has co-chaired ICLP 2015 (the 31st International Conference on Logic Programming) and KR 2018 (the 16th International Conference on Principles of Knowledge Representation and Reasoning), and is the current chair of COMMA 2022 (the 10th International Conference on Computational Models of Argument), corner editor on Argumentation for the Journal of Logic and Computation, a member of the editorial boards of the Argument and Computation journal and the AI journal, and a member of the Board of Advisors for KR Inc. and Theory and Practice of Logic Programming.