Inclusive AI

Pioneering equity-based AI systems with the Majority World

The Inclusive AI Group is a global public- and private-sector consortium dedicated to fostering collaboration and co-design among critical changemakers: design experts in the technology industry, AI ethicists, civic actors, and digital anthropologists.

Artificial Intelligence (AI) has permeated all sectors and industries and promises much opportunity to address the formidable global challenges we face today. However, it also poses significant challenges and disruptions. AI systems are trained on datasets that often represent WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies and contexts. The data used to train AI tools predominantly draw from Anglo-Saxon, middle-class, heterosexual, and white demographics. These datasets lack diversity, especially in the representation of the Global South, where 90 percent of the world’s youth currently live and who constitute the majority of users.

Our mission is to help build inclusive and sustainable AI data, tools, services, and platforms that prioritize the needs, concerns, experiences, and aspirations of chronically neglected user communities and their environments, with a special focus on the Global South.

Social justice and planetary wellbeing

To ensure that AI is optimized for social justice and planetary wellbeing, our group (through our core projects below) actively engages in conceptualizing, co-creating, and deploying inclusive AI-based technologies through a people- and planet-centered approach. We explore how to equitably, ethically, and creatively approach dimensions of representation, contribution, attribution, ownership, and value, and how to translate insights into design strategies that work for the most neglected end users. By uniting diverse voices and expertise, the Inclusive AI Group aims to lead the charge in creating AI technologies that are not only cutting-edge but also equitable and sustainable.

In this moment of rapid global technological change, it is more urgent than ever that private- and public-sector leaders come together to advocate for AI approaches that are truly equitable and sustainable, rather than simply furthering the biased approaches of a privileged few.

Highlights

  • Feminist Futures of Work Initiative

    FemLab is a researcher-activist cooperative that seeks to envision and enact how digital platforms can be optimized to enhance self-actualization, representation, and collectivization in a changing and increasingly precarious market and society.
    Read more

  • Creative AI and the Next Billion Creatives

    This research project aims to unpack what constitutes digital creativity and the creator economy in the Global South, with a focus on Gen Z populations (18-25 years) from resource-constrained contexts in India. The goal is to critically assess shifts in how creativity is defined, learned, and perhaps even monetized among these youth. We delve deeper into what kinds of content youth aspire to make and why, and into the role of design tools in enabling their aspirational creations.
    Read more

  • Fairness and Intersectional Non-Discrimination in Human Recommendation

    Algorithmic hiring is the use of tools based on Artificial Intelligence (AI) to find and select job candidates. Like other applications of AI, it is vulnerable to perpetuating discrimination. Considering technological, legal, and ethical aspects, the EU-funded FINDHR project will facilitate the prevention, detection, and management of discrimination in algorithmic hiring and closely related areas involving human recommendation.
    Read more

Co-PIs

Inclusive AI Consortium

Our objective is to facilitate dialogue outside our comfort zones, and particularly to bridge academia and civil society with the technology industry. Given the success of past collaborations between Professor Arora and Laura Herman, Head of AI Research at Adobe, and the rarity of such a partnership, they will co-lead this consortium and set a precedent for further academic-industry collaborations that align AI tools with contemporary social and planetary values and train others to become critical translators, listeners, and changemakers within their own sectors.

Their collaborators include the Institute of Technology & Society Rio, Algorithm Watch, Futur2, General Electric, KPMG, IDEO, and the British Council.

Events & Publications

  • Book Launch

    Launch of Professor Payal Arora’s new book, ‘From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech’

    Date: October 8th

    Read more

  • From Pessimism to Promise

    Lessons from the Global South on Designing Inclusive Tech

    Pub date: September 3, 2024
    Publisher: The MIT Press

    Read more


  • Narratives of Digital Ethics

    AGIDE (Academies for Global Innovation and Digital Ethics) report

    Pub date: June 26, 2024
    Publisher: Austrian Academy of Sciences

    Read more

  • Recommendations on the Use of Synthetic Data to Train AI Models

    Policy Guideline

    Pub date: February 14, 2024
    Publisher: United Nations University

    Read more

Consortium Members