Workshop: Logical Normativity and Logical Reasoning in AI Systems
Does logic constrain reasoning, belief, or knowledge? Does the answer depend on whether we are primarily concerned with natural, human agents or artificial agents?
The aim of this workshop is to stimulate interdisciplinary discussion of questions concerning the relationship between deductive logic and the reasoning capacities of bounded and/or artificial agents. We are especially interested in varieties of reasoning and plausible logical norms and sanctions for natural or artificial agents, as well as non-monotonic logics, inconsistency tolerance, belief revision, and the nature of content or information. The workshop will include two invited talks from different interdisciplinary perspectives, followed by a round table discussion about future directions for research in this area.
Program:
| Time | Session |
|---|---|
| 10:00-10:05 | Opening remarks |
| 10:05-11:20 | Sarit Kraus (computer science, Bar-Ilan), "Formal Models of Human Decision-making for Intelligent Systems that Interact Proficiently with Humans" (this talk is also part of the AI Colloquium) |
| 11:30-12:45 | Mark Jago (philosophy, Nottingham), "The Problem of Rational Knowledge" |
| 13:00-14:00 | Round table discussion |
Abstracts
Sarit Kraus, "Formal Models of Human Decision-making for Intelligent Systems that Interact Proficiently with Humans"
Automated intelligent agents that interact proficiently with people can be useful in supporting or replacing people in complex tasks. The inclusion of people presents novel problems for the design of automated agents’ strategies: people do not adhere to the optimal, monolithic strategies that can be derived analytically, and their behavior is affected by a multitude of social and psychological factors. In this talk we will discuss several formal models for predicting human decision-making to be used by intelligent agents. In particular, we will consider argumentation theories and game-theoretic models and discuss their accuracy in predicting human decision-making. We will demonstrate their use by intelligent agents that negotiate and argue with people, and we will compare them with machine-learning models.
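As a toy illustration of the contrast between analytically optimal play and observed human behavior (an illustrative sketch, not one of the models from the talk), one can compare a strict best-response predictor with a quantal-response (softmax) predictor, which keeps every option live and grades choices by utility. The `rationality` parameter below is a hypothetical knob, not a quantity from the abstract.

```python
import math

def best_response(utilities):
    """Classical prediction: probability 1 on the highest-utility
    option (ties broken by the first index)."""
    best = max(range(len(utilities)), key=lambda i: utilities[i])
    return [1.0 if i == best else 0.0 for i in range(len(utilities))]

def quantal_response(utilities, rationality=1.0):
    """Noisy-human prediction via softmax: higher-utility options are
    more likely, but no option gets probability zero. `rationality`
    interpolates between uniform random play (0) and strict best
    response (large values)."""
    weights = [math.exp(rationality * u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Example: three offers in a negotiation, with utilities 1.0, 2.0, 2.5.
offers = [1.0, 2.0, 2.5]
print(best_response(offers))          # [0.0, 0.0, 1.0]
print(quantal_response(offers, 1.5))  # ~[0.07, 0.30, 0.63]
```

Fitting a parameter like `rationality` to observed choices is one simple way such a model can absorb the social and psychological noise that a purely analytic solution ignores.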
Mark Jago, "The Problem of Rational Knowledge"
Real-world agents do not know all consequences of what they know. But we are reluctant to say that a rational agent can fail to know some trivial consequence of what she knows. Since every consequence of what she knows can be reached via a chain of trivial consequences, we have a paradox. I argue that the problem cannot be dismissed easily, as some have attempted to do. Rather, a solution must give adequate weight to the normative requirements on rational agents’ epistemic states, without treating those agents as mathematically ideal reasoners. I’ll argue that agents can fail to know trivial consequences of what they know, but never determinately. Such cases are epistemic oversights on the part of the agent in question, and the facts about epistemic oversights are always indeterminate facts.
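To make the chaining step explicit, here is one standard reconstruction of the paradox in outline (a sketch of the familiar closure argument, not necessarily Jago's own formulation), writing $K\varphi$ for "the agent knows $\varphi$", $\vdash_1$ for a single trivial inference step, and $\vdash$ for full logical consequence:

```latex
% Sketch of the chaining argument (a standard reconstruction;
% not necessarily the formulation used in the talk).
\begin{align*}
\text{(Trivial closure)} &\quad K\varphi \text{ and } \varphi \vdash_1 \psi
  \;\Longrightarrow\; K\psi \\
\text{(Chaining)} &\quad \varphi \vdash \psi
  \;\Longrightarrow\; \exists\, \chi_0,\dots,\chi_n \text{ with }
  \chi_0 = \varphi,\ \chi_n = \psi,\ \chi_i \vdash_1 \chi_{i+1} \\
\text{(Induction on } n\text{)} &\quad K\varphi \text{ and } \varphi \vdash \psi
  \;\Longrightarrow\; K\psi
\end{align*}
```

The conclusion is full logical omniscience, which contradicts the opening observation that real-world agents do not know all consequences of what they know; the abstract's proposal is to block the induction by letting some instances of trivial closure fail, though never determinately.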
Workshop organisers:
Human-centered Artificial Intelligence
The workshop is organised within the Utrecht University focus area Human-centered Artificial Intelligence (HAI). HAI brings together the various AI activities undertaken at Utrecht University. AI in Utrecht has a unique interdisciplinary profile that spans several departments, including computer science, philosophy, linguistics and psychology.
- Start date and time: -
- End date and time: -
- Location: Online in Teams
- Registration: Please register by sending an email to logicalnormativity@gmail.com.