We are thrilled to announce an incredible lineup of speakers for our upcoming panel discussion, "The Future of Safe and Ethical AI," taking place on April 29, 2025 at UC Berkeley Law.

Join us for an engaging conversation on how AI regulations are evolving, how industry leaders are approaching responsible AI, and the broader policy landscape shaping the future of AI safety.

📆 Date: April 29, 2025

🕥 Time: 10:30 a.m. – 2:00 p.m.

📍 Location: Goldberg Room, UC Berkeley Law, 225 Bancroft Way, Berkeley, CA

🔗 Register here: RSVP


Introducing the Sponsor and Cohosts:


Berkeley AI Safety Student Initiative (BASIS)

Special thank you to BASIS for sponsoring this event!


AI @ Berkeley Law (AI@BL)


Berkeley Center for Law & Technology (BCLT)


Introducing the Speakers

Lothar Determann

Partner, Baker & McKenzie


Lothar Determann practices and teaches international data privacy, technology, commercial, and intellectual property law. At Baker McKenzie in San Francisco and Palo Alto, he has counseled companies since 1998 on data privacy law compliance and on taking products and business models international. Admitted to practice in California and Germany, he has been recognized as one of the top 10 Copyright Attorneys and top 25 Intellectual Property Attorneys in California by the San Francisco & Los Angeles Daily Journal, and as a leading lawyer by Chambers, Legal 500, Thomson Reuters, IAM and others. For more information, see www.bakermckenzie.com. Contact: ldetermann@bakermckenzie.com.

Prof. Dr. Determann has been a member of the Association of German Public Law Professors since 1999 and teaches Data Privacy Law, Computer Law and Internet Law at Freie Universität Berlin (since 1994), the University of California, Berkeley School of Law (since 2004), UC Law SF (since 2010), Stanford Law School (2011) and the University of San Francisco School of Law (2000–2005). He has authored more than 170 articles and treatise contributions, including Healthy Data Protection (http://ssrn.com/abstract=3357990), Electronic Form over Substance (http://ssrn.com/abstract=3436327) and No One Owns Data (https://ssrn.com/abstract=3123957), as well as six books, including Determann’s Field Guide to Data Privacy Law (6th ed. 2025, also available in Arabic, Chinese, French, German, Hungarian, Italian, Japanese, Korean, Portuguese, Russian, Spanish, Turkish and Vietnamese), California Privacy Law – Practical Guide and Commentary on U.S. Federal and California Law (2023) and Determann's Field Guide to Artificial Intelligence Law (2024, also available in Chinese, German and Spanish).

Sheila Leunig

General Counsel, Leunig Law Group


Sheila Leunig is an executive legal and business strategist who provides fractional General Counsel services, helping to scale early-stage companies, develop and commercialize emerging technologies, and build risk management and AI governance programs. Her work has shaped the growth and success of video game publishers and developers, ad tech, and B2B SaaS platform companies including Zynga, Eidos, Ubisoft, LucasArts, AdRoll, Gainsight, the Partnership on AI, and Credo AI, where she has provided leadership to drive operational efficiency and accelerate growth.

Known for bridging technical, legal, and business perspectives, Sheila has extensive expertise in AI development, intellectual property, privacy, corporate governance, and compliance. She is also a recognized leader in Responsible AI governance, helping companies operationalize ethical AI principles to enable AI adoption and innovation while adhering to emerging global, federal, and state AI legislative frameworks and best practices.

In addition to her fractional legal services practice, Sheila is an advisor at the UC Berkeley Law AI Institute and co-host of Not Another AI Podcast, recently launched by Women Defining AI to elevate the stories of women and non-binary individuals working to harness the power of AI. She serves on several nonprofit boards and speaks regularly on emerging technology start-up growth, AI governance, and women in leadership.

Alisar Mustafa

Head of AI Strategy at Duco


Alisar Mustafa is the Head of AI Strategy at Duco, where she leads AI strategy, safety, and governance initiatives that help enterprise clients operate safely, securely, and responsibly. She is passionate about translating policy goals into practical, technical implementation. Her current work focuses on developing human-generated datasets for AI safety fine-tuning, with an emphasis on improving model factuality and alignment across controversial topics, diverse languages, and regional contexts. Prior to Duco, she co-founded CivicSync, where she led the company’s AI policy and safety work. She also worked on responsible innovation at Meta and has advised governments, startups, and international organizations on AI strategy, policy, and risk mitigation. She authors the AI Policy Newsletter, providing insights on emerging regulations and industry developments. Alisar holds a Master of Science in Public Policy and Management from Carnegie Mellon University and a Bachelor of Arts in Political Science from San Francisco State University.

Isabel Hahn

AI Policy @ EU Delegation to the US


Isabel Hahn is responsible for AI policy at the EU Mission to the US. Isabel is also officially part of the AI Office at the European Commission, the body responsible for overseeing compliance of General Purpose AI models with the EU AI Act’s obligations. Prior to joining the EU Mission to the US, Isabel worked as a Member of Cabinet at the EU’s data privacy regulator, the European Data Protection Supervisor. Isabel holds law degrees from Harvard Law School and the London School of Economics and is a licensed Attorney at Law in the State of New York.

Min Wu

Postdoctoral Scholar at Stanford University


Dr. Min Wu is a Postdoctoral Scholar working with Prof. Clark Barrett in the Department of Computer Science at Stanford University. She is also affiliated with the Stanford Center for AI Safety and the Center for Automated Reasoning. Previously, she completed her PhD in Computer Science under the supervision of Prof. Marta Kwiatkowska at the University of Oxford.

Her research focuses on safe and trustworthy AI with verifiable guarantees, positioned at the intersection of AI and formal methods. The grand vision of her work is to develop AI systems, particularly those deployed in high-stakes applications, that are verifiably reliable and transparent.

Introducing the Moderator: