To give AI-focused women academics and professionals their deserved time in the spotlight, TechCrunch is launching a series of interviews celebrating remarkable women who have contributed to the AI revolution. Throughout the year, TechCrunch will publish profiles acknowledging essential work that too often goes overlooked in the AI landscape. Explore more profiles here.
Claire Leibowicz leads the AI and Media Integrity program at the Partnership on AI (PAI), an industry consortium backed by major players including Amazon, Meta, Google, and Microsoft, where she champions the responsible deployment of AI technology. She also oversees PAI's AI and Media Integrity steering committee.
In 2021, Leibowicz served as a journalism fellow at Tablet Magazine, followed by a fellowship at The Rockefeller Foundation’s Bellagio Center in 2022, focusing on AI governance. With a BA in psychology and computer science from Harvard and a master’s degree from Oxford, Leibowicz provides guidance on AI governance, generative media, and digital information to companies, governments, and non-profit organizations.
Q&A
How did you get your start in AI, and what first attracted you to the field?
It may sound paradoxical, but I came to AI through a fascination with human behavior. Growing up in New York, I was captivated by the many ways people interact and by how a diverse society takes shape. That curiosity led me to big questions about truth, justice, trust, intergroup conflict, and belief systems, which I first pursued through cognitive science research. I soon realized that technology shapes the answers to these questions, and the idea of artificial intelligence as a reflection of human intelligence drew me in.
That journey brought me into computer science classrooms, where teachers like Professor Barbara Grosz, a pioneer in natural language processing, and Professor Jim Waldo, who blends philosophy with computer science, stressed the importance of bringing diverse perspectives to bear on the social impacts of technology, AI included. This interdisciplinary approach made clear that technology's implications extend well beyond the technical, into geopolitics, economics, and social dynamics, and that tackling technological problems demands a multifaceted approach.
Whether it’s educators exploring the impact of generative AI tools on pedagogy, museum curators experimenting with predictive exhibit routes, or doctors exploring innovative image detection methods, AI permeates various domains. The inherent intellectual diversity in working with AI intrigued me, presenting an opportunity to influence diverse facets of society.
Which AI-related accomplishment fills you with pride?
I am proudest of initiatives that bring diverse viewpoints together in an innovative, action-oriented way, and that don't merely accommodate dissenting opinions but actively encourage them. I joined PAI as its second staff member six years ago, drawn by the organization's pioneering commitment to treating diverse perspectives as fundamental to AI governance, to mitigating harm, and to driving practical adoption and impact in the field. Watching PAI evolve alongside the AI landscape has been gratifying, particularly in shaping the organization's embrace of multidisciplinarity.
Our work on synthetic media over the past six years, begun well before generative AI entered public awareness, exemplifies the potential of multistakeholder AI governance. In 2020, working with nine organizations across civil society, industry, and media, we helped shape Facebook's Deepfake Detection Challenge, a machine learning competition for building models that identify AI-generated media. Those outside perspectives shaped the fairness criteria and objectives of the winning models, showing how human rights experts and journalists can contribute to a seemingly technical problem like deepfake detection. Last year, we published PAI's Responsible Practices for Synthetic Media, a set of normative guidelines on responsible synthetic media endorsed by 18 organizations from diverse backgrounds, including OpenAI, TikTok, Code for Africa, Bumble, the BBC, and WITNESS. Producing actionable guidance informed by both technical and social realities is meaningful in itself, but winning that institutional backing marks a significant milestone: the endorsing institutions committed to issuing transparency reports on how they navigate the synthetic media landscape. Projects that offer tangible guidance, and then demonstrate how to implement it across institutions, are especially meaningful to me.
How do you navigate the challenges posed by the male-dominated tech and AI industries?
Throughout my career, I've been fortunate to have both male and female mentors who supported me and challenged me constructively, fostering my growth. Focusing on shared interests and on the fundamental questions driving the AI field has proven a powerful way to bring together people from different backgrounds and perspectives. Notably, PAI's team is more than half women, and many organizations working on AI and responsible AI have a strong female presence, a positive step for gender representation in the AI ecosystem.
What guidance would you offer to women aspiring to enter the AI realm?
The spaces within AI that skew most heavily male tend to be the technical ones. While technical proficiency shouldn't overshadow other forms of AI literacy, having technical expertise has bolstered my own confidence and effectiveness in those environments. Two things matter here: achieving gender parity in technical roles, and recognizing the expertise of people from fields like civil rights and politics, which already have more balanced representation. Equipping more women with technical skills is crucial to balancing gender representation in AI.
Establishing connections with women in the AI domain who have navigated the complexities of balancing professional and personal life can be immensely rewarding. Seeking guidance from role models on career-related and parenthood dilemmas, alongside addressing the unique challenges women encounter in the workplace, equips individuals to tackle these obstacles effectively.
What are the most critical challenges AI confronts as it progresses?
AI's evolution raises thorny questions about truth and trust, both online and offline. With AI able to generate or alter images, video, and text, the veracity of information becomes ever harder to establish. Against this backdrop, questions arise: Can we trust what we see? How reliable is evidence when documents can be convincingly manipulated? Can we preserve human-only spaces online when replicating a real person with AI is so easy? Navigating the trade-offs AI presents between free expression and potential harm is a significant dilemma. More broadly, ensuring that the information ecosystem reflects perspectives from around the world, including the public's, rather than being shaped by a handful of entities, is a crucial challenge.
In addition to these specific concerns, PAI delves into various facets of AI and society, encompassing fairness and bias in the era of algorithmic decision-making, the reciprocal impact of AI on labor, responsible AI system deployment, and enhancing AI systems to reflect diverse viewpoints. At a foundational level, deliberating on how AI governance can reconcile extensive trade-offs by encompassing diverse perspectives is imperative.
What should AI users be mindful of?
Above all, AI users should be skeptical of claims that sound too good to be true. The surge of generative AI has produced hyperbolic and often inaccurate portrayals of what AI can do. It's crucial to understand that AI acts as an amplifier of existing problems and opportunities more than as a revolutionary force. That realization shouldn't make anyone take AI less seriously; rather, it offers a framework for navigating an AI-saturated environment effectively. Recognizing how familiar many AI challenges are, while acknowledging what is genuinely new about the current landscape, lets users engage with AI judiciously.
What constitutes responsible AI development?
Responsible AI development necessitates an expanded perspective on the stakeholders involved in shaping AI initiatives. While technology companies and social media platforms play a pivotal role in influencing the impact of AI systems, a diverse array of institutions spanning civil society, industry, media, academia, and the public must engage collaboratively to forge responsible AI solutions that serve the public interest.
Consider the responsible creation and deployment of synthetic media as an example. While technology companies may grapple with the responsibilities tied to the influence of synthetic videos on users before elections, journalists may express concerns regarding imposters fabricating synthetic videos under trusted news brands. Human rights advocates may contemplate the responsibilities associated with the diminished impact of videos as evidence of abuses due to AI-generated media. Simultaneously, artists may find creative avenues through generative media but harbor apprehensions about unauthorized utilization of their creations to train AI models for generating new media. These diverse considerations underscore the criticality of involving various stakeholders in endeavors aimed at responsibly building AI, illustrating how different institutions are both impacted by and influencing the integration of AI into society.
How can investors advocate for responsible AI practices more effectively?
DJ Patil's insightful revision of the "move fast and break things" ethos into "move purposefully and fix things" captures the essence of fostering responsible AI practices without impeding progress. Investors can play a pivotal role in promoting this mindset by giving portfolio companies the time and leeway to embed responsible AI practices without stifling innovation. Institutions frequently cite time constraints and tight deadlines as barriers to prioritizing ethical conduct, and investors are well placed to shift that dynamic.
My tenure in the AI domain has prompted contemplation of profound human-centric queries, necessitating collective responses from all stakeholders involved.