Call for proposals – Interdisciplinary Workshop on: Who Are ‘Humans’ in Military AI?
Utrecht University and the Asser Institute are organising a workshop on 13 June 2025 to understand and evaluate who are ‘humans’ involved in – or excluded from – the development and use of AI systems in the military domain. Interested researchers and practitioners are invited to submit abstracts of up to 500 words. The workshop will be combined with a keynote lecture on 12 June, open to students, researchers, practitioners, and the wider public.
AI in military operations
The use of AI in military operations is widely seen as transforming military capabilities, following the precedents set by nuclear, aerospace, cyber, and biotechnologies. Recently, algorithms have been employed at large scale and in densely populated urban areas, in the conflicts in Gaza, Ukraine, Yemen, Iraq, and Syria. In the Gaza war ongoing since October 2023, it has been reported that the Israel Defense Forces used a series of AI-enabled decision support systems (AI-DSS) to generate targets in a manner that ultimately imposed extremely high civilian costs.
Over the last decade, ethical, humanitarian, and legal concerns have been raised with regard to autonomous weapons systems (AWS) and other AI-enabled military systems (such as AI-DSS). A series of diplomatic talks have taken place since 2014 within the UN Convention on Conventional Weapons (CCW) to discuss potential regulation applicable to ‘lethal’ AWS.
Meaningful human control
‘Meaningful human control’ has been one of the pillars of the decade-long debate on the regulation of AWS. Several similar terms have also been used (such as ‘human agency’, ‘appropriate levels of human judgment’, and ‘direct control and supervision of humans’). Yet ‘meaningful human control’ has been given prominence by both States and non-governmental organisations at the CCW. The concept has also been used in discussions of broader AI-enabled military systems, beyond AWS.
Interdisciplinary workshop
This workshop will critically look at the political construction of ‘humans’ involved in—or excluded from—the process of controlling and influencing the development and use of AI-enabled military systems. In a similar vein, the workshop will also look at ‘humans’ who are most affected by the use of such weapon systems as part of ‘human-machine-human interactions’.
What particular assumptions are cast upon ‘humans’ involved in the development or use of AI-enabled military systems? What critiques should we raise with regard to the assumptions embedded in the concept of humans, for instance from gender, race, and ethnicity standpoints? Who is actually present in, or absent from, the groups of humans who may produce or interact with machines—such as military commanders, military legal advisors, executives of defence companies, program developers, engineers, and, ultimately, civilians in hostilities?
Call for proposals and timeline
Interested researchers and practitioners are invited to submit abstracts of up to 500 words. Deadline for submitting abstracts: 1 December 2024 (to be sent to GDS@uu.nl and DILEMA@asser.nl).
The following information must also be provided with each abstract:
- The name and affiliation of the author(s);
- A short biography of the author(s);
- The email address of the author(s).
Authors will be notified of the outcome of the selection by 20 December 2024.
In selecting the proposals, we will prioritise the inclusion of diverse perspectives, especially those from groups who are under-represented in related academic communities in the Netherlands, and act in accordance with the UU Equality, Diversity and Inclusion (EDI) plan. Authors of accepted abstracts should submit their draft (4,000-6,000 words) by 1 May 2025. The draft will be circulated among the workshop participants. A limited amount of funding is available to cover invited speakers’ travel and accommodation expenses for the workshop.
About the organizers
This workshop is an interdisciplinary initiative of Utrecht University, co-organised and financed by the focus area Governing the Digital Society and the Institutions for Open Societies' platform Contesting Governance – with the Asser Institute’s DILEMA project (Designing International Law and Ethics into Military Artificial Intelligence).
For substantive questions, please contact Dr. Machiko Kanetake.