
Mozilla, August 2024 – Mozilla Rise25 Awards
“Philosopher and AI Ethics Lab founder Cansu Canca, a change agent honoree, spoke passionately about the critical importance of ethical design in AI, highlighting how every decision made in the design process has the potential to shape society.
‘When we design AI systems, we’re not just making technical choices. We’re making moral and philosophical decisions,’ she said. Cansu challenged developers to consider questions that go beyond code: ‘What is a good life? What is a better society?’ These questions, she argued, should guide every step of AI development.”

Responsible AI Institute, December 2024 – Leadership in Responsible AI Awards
Canca and her team received nominations in three categories:
- Outstanding Individual: Cansu Canca, Director of Responsible AI Practice at the Institute for Experiential AI (EAI), Northeastern University
- Outstanding Organization: the Responsible AI Practice at EAI, Northeastern University
- Outstanding Initiative: Verizon’s Responsible AI Initiative (via its partnership with the Responsible AI Practice at EAI)

NGN, July 2024 – “What do corporations need to ethically implement AI? Turns out, a philosopher”
“Cansu Canca is full of questions — but that’s her job.
The Director of Responsible AI Practice at the Institute for Experiential AI and a Research Associate Professor in the department of philosophy at Northeastern University, Canca has made a name for herself as an ethicist tackling the use of AI.
As the founder of the AI Ethics Lab, Canca maintains a team of ‘philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,’ she says.”

Forbes, March 2024 – “AI Safety – We’re Working on It” (article on Canca’s talk)
“Extending the analogy to AI, Canca described the risks of harm it poses: research shows that women, minorities, and marginalized groups tend to get fewer options or opportunities in systems driven by AI models.
The problem, she said, is that a lot of this harm and unfairness is baked in: it’s in the training data, in the models we choose, and in the trade-offs we incorporate into the AI system. Existing ethical problems only get magnified by the model.”

Radio Davos, June 2023 – “Responsible AI: how can philosophy help us make better tech?” (podcast)
“The rise of generative artificial intelligence raises a lot of philosophical questions. So can philosophy help us make AI that serves humanity for the good?
On this episode we hear from ‘applied ethicist’ Cansu Canca, AI Ethics Lead at the Institute for Experiential AI, Northeastern University, USA; and from Sara Hooker, head of Cohere For AI, a research lab that seeks to solve complex machine learning problems.”

World Economic Forum, December 2023 – 3 Leading Thinkers (video)
“Artificial intelligence (AI) is a rapidly developing technology that has the potential to transform our lives in many ways. However, there is also a risk that AI could be used for harmful purposes. It is, therefore, important to ensure that AI is developed and used in a way that benefits humanity.
Cansu Canca is a philosopher and the founder and director of the AI Ethics Lab. She believes that AI reflects our own morality and that we need to prioritize strong ethical boundaries when developing AI systems. Canca argues that we can learn from the way that we regulate healthcare and that we need to develop similar ethical frameworks for AI.”