Cansu Canca, Ph.D.

Philosopher. Advisor. Associate Professor. | Founder + Director of AI Ethics Lab

GUIDING INDUSTRY & ACADEMIA IN RESPONSIBLE AI STRATEGY

Dr. Cansu Canca works with developers and deployers of AI systems to ensure the ethical design and implementation of AI. She advises executives, boards of directors, and investors on responsible AI strategy, governance, and risk mitigation, empowering them to navigate AI transformation with confidence.

Cansu serves as an expert advisor on ethics, advisory, and editorial boards, and advises Fortune 500 companies on responsible AI strategy. She works with the World Economic Forum’s AI Governance Alliance on developing guidelines and best practices for industry and investors, and has co-designed responsible AI tools for the United Nations and INTERPOL. She is an appointed member of the World Health Organization’s Technical Advisory Group on AI.

She is the Founder and Director of AI Ethics Lab, a pioneering initiative focused on advising practitioners and conducting multidisciplinary research on AI ethics, and the Founding Director of the Responsible AI Practice at Northeastern University, where she is also a Research Associate Professor.

Mozilla Rise25 Awards

Time Magazine Feature

World Economic Forum

Collaboration with WEF: “Responsible AI Playbook for Investors”

“This white paper explores the essential role of investors in advancing the adoption of RAI. Based on extensive research and over 100 stakeholder interviews, the paper encourages investors to engage with corporate boards, investment partners and the broader ecosystem to promote RAI adoption. It highlights the necessity of strong governance frameworks and clear RAI standards designed to ensure AI applications are honest, helpful and harmless. It underscores how RAI can mitigate risks and meet regulatory requirements while driving growth through enhanced customer trust and brand reputation.

Finally, it offers a playbook to help investors balance immediate efforts to harness AI’s potential with a longer-term prudent approach to foundational concerns.” Download the playbook here.

IEEE PUBLICATION: ‘FOR ARGUMENT’S SAKE, SHOW ME HOW TO HARM MYSELF!’ JAILBREAKING LLMS IN SUICIDE AND SELF-HARM CONTEXTS

“Recent advances in large language models (LLMs) have led to increasingly sophisticated safety protocols and features designed to prevent harmful, unethical, or unauthorized outputs. However, these guardrails remain susceptible to novel and creative forms of adversarial prompting, including manually generated test cases. In this work, we present two new test cases in mental health for (i) suicide and (ii) self-harm, using multi-step, prompt-level jailbreaking to bypass built-in content and safety filters. We show that user intent is disregarded, leading to the generation of detailed harmful content and instructions that could cause real-world harm.”

Read our paper on LLMs and safety here.

Read the coverage of our paper in TIME Magazine and the L.A. Times.

Responsible AI for Leaders: Executive Education

“Our transformative executive education course Responsible AI for Leaders helps you navigate the complexities of AI ethics and governance. This premier program equips leaders across industries with actionable strategies and practical tools, providing step-by-step guides to ethically integrate AI into your business. Set new standards in innovation and leadership by learning to develop and execute a comprehensive Responsible AI (RAI) strategy.”

“Why it’s important for business executives to lead the way with a strong and ethical AI framework”

Read the Northeastern Global News article covering our Executive Education here.

ACM PUBLICATION: WHY THE GAMING INDUSTRY NEEDS RESPONSIBLE AI

“Incorporating AI into the development, operation, and servicing of video games adds new issues to an already-complex landscape of ethical concerns. Practices, tools, and governance structures developed in responsible AI can offer effective ways to navigate this complexity.”

Read our paper on Responsible AI in Gaming here.

“As artificial intelligence transforms gaming, Northeastern researchers urge industry to adopt responsible AI practices” Read the Northeastern Global News article covering our work here.