Looking back on the workshop "Logical Normativity and Logical Reasoning in AI Systems"

Kickoff workshop a fruitful start to a new HAI research collaboration

Girl standing in projection of light.
Photo: Mahdis Mousavi (Unsplash)

The kickoff workshop of the "Logical Normativity and Logical Reasoning in AI Systems" group took place online on November 2nd, 2020. The event formally marked the opening of an interdisciplinary collaboration between Natasha Alechina (computer science), Colin Caret (philosophy), and Erik Stei (philosophy). The organizers are grateful for funding from the HAI focus area, which they expect to use for an in-person conference as soon as circumstances permit.

The online workshop featured invited talks by Sarit Kraus (Bar-Ilan, computer science) and Mark Jago (Nottingham, philosophy). The talks drew a sizable audience comprising internal members of the focus area as well as external researchers from logic, philosophy, and computer science. Recordings of the talks are available through Microsoft Stream (to watch the videos, you must be a member of the Human-centered AI Team in Teams; please request to join via this link or email us at hai@uu.nl).

Roundtable

In the final session of the workshop, attendees took part in a round table discussion on the central themes, viz. logic, norms, agency, and their relation to AI. The organizers hoped that this discussion would generate ideas for future research, and the output did not disappoint: the discussion encompassed many different, fruitful perspectives. The following questions and remarks were raised at the round table:

  • A common theme of the invited talks: do we need formal models to describe agents?
  • Using models to assist human decision-making: a mission statement for computer science?
  • Are 'sub-optimally rational' agents a deep challenge for agent specification? Note that AI agents also need models of sub-optimally rational humans they interact with.
  • Can sub-optimality be a guide to agent design? For example, can agents learn to avoid common logical mistakes of human reasoning? (cf. "reinforcement learning")
  • Is the rationality of artificial agents merely instrumental, i.e. do they simply pursue, as efficiently as possible, the stated goals of the clients for whom they are designed?
  • Can normative and descriptive understandings of rationality be integrated?
  • How do rationality and intelligence differ? (cf. Buridan's Ass)
  • A lesson of machine learning: logic may be important to rationality, but not intelligence? 

Further collaboration

Overall, the organizers were very pleased with the event and look forward to further discussion and collaboration between interested parties in various disciplines. Do not hesitate to contact them if you have ideas for collaboration on these topics or want to be kept updated on future events.

Human-centered Artificial Intelligence

The workshop was organized within the Utrecht University focus area Human-centered Artificial Intelligence (HAI). HAI bundles the various AI activities undertaken at Utrecht University. AI in Utrecht has a unique interdisciplinary profile that spans various departments, including computer science, philosophy, linguistics, and psychology.