Designing Ethical Artificial Intelligence for All

A former Cognitive Science student wants AI ethics to take everyone into account

How do we design ethical AI systems? And given how pervasive these systems have become, how do we make sure the experiences of people from a variety of backgrounds are accounted for in their design?

Jocelyn Wong, a former Honours Cognitive Science student, wanted to explore these questions. She wanted to know how the processes of designing AI systems can be audited in a way that takes into account the lived experiences of people from all walks of life.

In a relatively new field such as AI ethics, “A lot of the proposals that came out about auditing were about the process and what procedures should be in place, but they weren’t really thinking about the types of people who are involved, and even if they did, it was always in the context of the organization,” said Wong. “There wasn’t any sort of consideration for who these people are more broadly.”

Wong was one of CDSI’s BMO Jr. Responsible AI Scholars last year. The award goes to projects that promote the responsible use of AI in a variety of domains, including art, policy, and decision-making. She spent last summer researching her ideas under the supervision of professors Jocelyn Maclure and AJung Moon.

To explore these questions, she looked at a real-life 2022 case study published in the journal AI and Ethics, which followed an ethics-based audit (EBA) of the pharmaceutical company AstraZeneca. After following the company’s activities for a year, the researchers found the main challenges with conducting an EBA were what they called “classical governance challenges,” including “ensuring harmonized standards across decentralized organizations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes.”

Based on her analysis of this study, Wong concluded that many demographics are underrepresented in the auditing of AI systems, and that the roots of this imbalance in the technology field go deep. Her project, “Non-Technical Operationalization in AI Ethics Audits: Reconsideration of Stakeholders Through Black Feminism,” argues for an intersectional approach to understanding how technology can be used as an instrument of power.

Wong relies in particular on the Matrix of Domination framework, first articulated by the American academic Patricia Hill Collins, to analyze stakeholder dynamics in EBAs. She argues that EBAs should incorporate a Matrix of Domination with seven axes: religion, socioeconomic status, sexual orientation, organizational role, nationality, race, and gender. Through her research, Wong hopes to encourage dialogue between those in more technical fields and those interested in the democratic control and governance of technology.

If you’re a student with an idea for an AI-related research project, start brainstorming now so you can be ready for our call for applications early in 2026.

Learn more about past projects and how the BMO Jr. Awards can help you in your research.