Moderating Social XR: AI-based Harassment Detection
Social extended reality (XR), in which users interact with each other in immersive virtual environments, is increasingly popular. However, users are also increasingly affected by harassment. In this project, we are creating AI models that can identify such harassment from non-verbal behaviors (e.g., gestures or invasion of personal space). This provides insights into how AI models understand social interactions between humans. It is also useful for moderating online spaces: the models automatically detect potential harassment, which human moderators then review and act on, e.g., by removing harassing users from the platform. In this way, the project helps keep online social spaces safe.
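To make the detect-then-review pipeline concrete, here is a purely illustrative Python sketch of one non-verbal signal mentioned above: sustained invasion of personal space, inferred from positional tracking data. The Frame type, the thresholds PERSONAL_SPACE_M and MIN_DURATION_S, and the rule-based logic are hypothetical assumptions for illustration, not the project's actual models or tuned values.

import math
from dataclasses import dataclass

# Hypothetical thresholds for illustration only, not the project's values:
PERSONAL_SPACE_M = 0.45   # distance (metres) treated as personal space
MIN_DURATION_S = 3.0      # how long the invasion must persist to be flagged

@dataclass
class Frame:
    """One tracking sample: a timestamp and the head positions of two avatars."""
    t: float                       # seconds since session start
    a: tuple[float, float, float]  # avatar A head position (metres)
    b: tuple[float, float, float]  # avatar B head position (metres)

def flag_space_invasions(frames):
    """Yield (start, end) intervals in which the avatars stay closer than
    PERSONAL_SPACE_M for at least MIN_DURATION_S. Each interval is only a
    *potential* incident, to be queued for human moderator review."""
    start = last_t = None
    for f in frames:
        last_t = f.t
        if math.dist(f.a, f.b) < PERSONAL_SPACE_M:
            if start is None:
                start = f.t            # invasion begins
        else:
            if start is not None and f.t - start >= MIN_DURATION_S:
                yield (start, f.t)     # invasion lasted long enough to flag
            start = None
    if start is not None and last_t - start >= MIN_DURATION_S:
        yield (start, last_t)          # session ended mid-invasion

if __name__ == "__main__":
    # Avatar B enters A's personal space between t = 2.0 s and t = 7.0 s.
    frames = [Frame(i * 0.5, (0.0, 1.6, 0.0),
                    (0.3 if 4 <= i <= 14 else 2.0, 1.6, 0.0)) for i in range(30)]
    for start, end in flag_space_invasions(frames):
        print(f"Potential invasion {start:.1f}s-{end:.1f}s: queue for moderator review")

A deployed system would learn such patterns from data rather than rely on hand-set distance thresholds; the sketch only illustrates the flag-then-human-review workflow described above.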
Researchers
Maarten Gerritse
Student researchers
Caesar Alpha Irawan, Iris Folpmers
Academic supervisors
Dr. Julian Frommel
Grant funding agency; (co-)funding (non-)academic partners
NWO NGF - AiNed XS Europa