Invited speakers (confirmed)

Speaker: Christoph Benzmüller (Freie Universität Berlin, Germany)

Title: Ethico-legal governance of intelligent artificial agents — Can post-hoc normative reasoning competencies prevent AI systems from going rogue?

I will argue for the development of ethico-legal governors to assess, justify, and legitimize options for critical actions of an intelligent artificial agent, ideally before action execution is granted. Such governor technology (which bears some similarities to the slow “System 2” described by Kahneman) calls for the provision of deliberative legal and moral reasoning competencies in intelligent artificial agents.
My research is based on the following assumptions:
(i) There exist critical applications in which the naive deployment of modern AI technology could cause significant damage or harm. It is of societal interest to invest in preventive measures.
(ii) AI technology that is solely based on machine learning technology (which bears some similarities to the fast “System 1” as described by Kahneman) appears incapable of developing reliable and robust moral reasoning competency that is well aligned with humanitarian norms and values.
(iii) The combination of machine-learning-based AI and declarative, logic-based approaches, in contrast, has the potential to provide a convincing solution, in particular if the subsymbolic and symbolic reasoning layers are brought into fruitful interaction.
My current research therefore focuses on the provision of flexible and expressive symbolic means to represent and reason with normative theories. To address this challenge we have developed the LogiKEy formal framework, methodology, and associated tool support (joint work with colleagues from the University of Luxembourg). LogiKEy supports the design and engineering of ethical reasoners, normative theories, and deontic logics in a highly flexible way, and it also provides a fruitful link between different research communities, including knowledge representation and reasoning in AI, the deduction systems community, and formal ethics.
– Christoph Benzmüller, Xavier Parent, Leendert van der Torre. Designing Normative Theories of Ethical Reasoning: Formal Framework, Methodology, and Tool Support. 2019.
– Daniel Kahneman. Thinking, Fast and Slow. 2012.

Bio: Christoph Benzmüller is a professor of Artificial Intelligence/Computer Science and Mathematics at Freie Universität Berlin, Germany. He is also a visiting scholar at the University of Luxembourg. Christoph’s prior research institutions include Stanford University and CMU (USA), the universities of Cambridge, Birmingham, and Edinburgh (UK), and Saarland University (Germany). Christoph received his PhD (1999) and his Habilitation (2007) from Saarland University; his PhD research was partly conducted at CMU. In 2012, Christoph was awarded a Heisenberg Research Fellowship of the German Research Foundation (DFG).

Christoph’s research activities interface the areas of artificial intelligence, philosophy, mathematics, computer science, and natural language. Many of these activities draw on classical higher-order logic (HOL). Christoph has contributed to the semantics and proof theory of HOL, and together with colleagues and students he has developed the Leo theorem provers for HOL. More recently, he has been utilising HOL as a universal meta-logic to automate various non-classical logics in topical application areas, including machine ethics & machine law (responsible AI), rational argumentation, metaphysics, and category theory.


Speaker: Marc van Zee (Google AI, the Netherlands)

Title: Measuring Compositional Generalization

Abstract: Machine learning has made tremendous progress in recent decades in fields such as vision, language, and speech, often replacing domain-specific, hand-tuned systems with general-purpose, domain-independent architectures. In this talk I discuss two important challenges for machine learning. First, I try to convey that state-of-the-art machine learning methods exhibit limited compositional generalization. At the same time, there is a lack of realistic benchmarks that comprehensively measure this ability, which makes it challenging to find and evaluate improvements. I present a method to systematically construct such benchmarks, and a large and realistic natural language question answering dataset that we have developed recently [1]. Second, I discuss some limitations of the so-called “single loss function perspective” employed in much machine learning research, and how I believe it relates to human (social) intelligence.

[1] Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. Measuring Compositional Generalization: A Comprehensive Method on Realistic Data. In ICLR 2020.

Bio: Marc van Zee did his undergraduate studies at the Eindhoven University of Technology (BSc. Industrial Design, honours programme) and Utrecht University (MSc. Technical Artificial Intelligence, cum laude). He wrote his master’s thesis at Linköping University in Sweden with Patrick Doherty on temporal action logics for Unmanned Aerial Vehicles. Marc did his PhD at the University of Luxembourg, working on logics for intention, argumentation, and reinforcement learning. After completing his PhD with distinction in early 2017, Marc joined Google Brain, first in Zurich and now in Amsterdam. Marc’s current research interest is combining machine learning architectures with structured knowledge representation techniques in order to improve their ability to do compositional generalization.

Speaker: Fei Wu (Zhejiang University, China)

Title: Big data intelligence: from correlation discovery to causal reasoning

Abstract: The discovery of correlations from large-scale data sets is a topic of great current interest. Artificial intelligence is now heading towards integrating data-driven learning and knowledge-guided inference to perform better reasoning and decision-making, instead of correlation learning via metric matching. This talk will discuss potential ways to fuse symbolic AI, data-driven learning, and reinforcement learning to support causal reasoning.

Bio: Fei Wu received his B.Sc., M.Sc., and Ph.D. degrees in computer science from Lanzhou University, the University of Macau, and Zhejiang University in 1996, 1999, and 2002, respectively. From October 2009 to August 2010, Fei Wu was a visiting scholar in Prof. Bin Yu’s group at the University of California, Berkeley. Currently, he is a Qiushi distinguished professor at the College of Computer Science of Zhejiang University. He is the vice-dean of the College of Computer Science and the director of the Institute of Artificial Intelligence of Zhejiang University. He is currently an associate editor of Multimedia Systems and an editorial board member of Frontiers of Information Technology & Electronic Engineering. He has won various honors, such as the Award of the National Science Fund for Distinguished Young Scholars of China (2016). His research interests mainly include artificial intelligence, multimedia analysis and retrieval, and machine learning.


Speaker: Dag Westerståhl (Stockholm University, Sweden)

Title: Notes on Compositionality: what, why, and how? (joint work with Alexandru Baltag and Johan van Benthem)

Abstract: The principle of compositionality — that the meaning of a complex syntactic expression is determined by the meanings of its parts and the mode of syntactic composition — plays an important role in logic, linguistics, computer science, and cognitive science. Yet its status is still debated; papers and books keep being written about it. Is it a refutable empirical claim, or a methodological design principle, or a bit of both? This paper adds some concrete considerations to these debates, starting from a number of case studies, in which seemingly non-compositional phenomena have been successfully treated in a compositional manner. Among the examples are formal semantic accounts of the intensional phenomena first brought up by Frege, and Hodges’ team semantics for Hintikka’s allegedly non-compositional IF logic. What exactly are the criteria of success here? Are there common patterns to these compositional ‘solutions’? Are the solutions in some sense unique? We shall make no grand new claims, but ask some questions that hopefully make it clearer what is at stake.
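For readers new to the topic, the principle discussed in the abstract has a standard algebraic formulation (given here only as background, not as part of the abstract): a meaning assignment μ is compositional when, for every syntactic rule σ, there is a semantic operation r_σ such that

```latex
% Compositionality as a homomorphism condition:
% for every n-ary syntactic rule \sigma there is an operation r_\sigma with
\mu\bigl(\sigma(e_1,\dots,e_n)\bigr) \;=\; r_\sigma\bigl(\mu(e_1),\dots,\mu(e_n)\bigr)
```

In other words, the syntax-to-semantics mapping is a homomorphism. Hodges’ team semantics, mentioned in the abstract, recovers this condition for IF logic by enriching the semantic values from single assignments to sets of assignments.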

Bio: Dag Westerståhl is a Professor of Theoretical Philosophy at Stockholm University and currently Jin Yuelin Professor of Logic at Tsinghua University, Beijing. He received his PhD under Per Lindström at the University of Gothenburg, and is a member of the Royal Swedish Academy of Sciences. His main research interests are generalized quantifiers, formal semantics, finite model theory, and philosophy of language. Together with Stanley Peters he is the author of the book Quantifiers in Language and Logic (OUP, 500 pp., 2006). Among his recent works are papers on the semantics of possessive and exceptive constructions in English (with Stanley Peters), and on logical consequence and logical constants (with Denis Bonnay). He is currently writing a book on compositionality together with Peter Pagin.