
### Head of Global Policy at Hugging Face: Irene Solaiman’s Impact in AI

Irene Solaiman, profiled as part of TechCrunch’s series on women in AI, builds and leads AI policy at Hugging Face.

To give deserving women academics and professionals in AI their time in the spotlight, TechCrunch is launching a series of interviews focused on remarkable women who have contributed to the AI revolution. New profiles will run periodically throughout the year as the AI boom continues, highlighting impactful work that too often goes unrecognized. More profiles in the series can be found here.

Irene Solaiman began her AI career as a researcher and public policy manager at OpenAI, where she led a new approach to releasing GPT-2, a predecessor to ChatGPT. After serving as an AI policy manager at Zillow, she joined Hugging Face as head of global policy. Her responsibilities range from building and leading the company’s global AI policy to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE) on AI issues and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Irene Solaiman, Leading Global Policy at Hugging Face

Briefly, What Sparked Your Interest in AI, and How Did You Get Your Start in the Field?

Paths into AI are often non-linear, and mine was no exception. My fascination with AI began with science fiction, a common entry point for many people, socially awkward teenagers included. I studied human rights policy first and then took computer science courses, seeing AI as a tool to work on human rights issues and build a better future. Doing technical research and leading policy in a field full of unanswered questions and untaken paths keeps my work exciting and fulfilling.

Which Accomplishments in the AI Sector Are You Particularly Proud Of?

I am proud that my expertise resonates across the AI field, particularly my writing on release considerations in the complicated landscape of AI system releases and openness. Seeing my paper on an AI Release Gradient framework prompt discussion among scientists, and be cited in government reports, is affirming that I am moving in the right direction. Personally, some of the work that motivates me most is on cultural value alignment, which aims to ensure that AI systems work well for the cultures in which they are deployed. Working with my wonderful co-author and dear friend Christy Dennison on Adapting Language Models to Society was a labor of love, and one that has shaped safety and alignment work today.

How Do You Navigate the Challenges Presented by the Male-Dominated Tech and AI Sectors?

Navigating the male-dominated tech industry, and the male-dominated AI field within it, has meant finding my people: from working with company leaders who care deeply about the same causes I champion, to research co-authors with whom I can begin every working session with something like a mini therapy session. Affinity groups are hugely helpful for building community and sharing tips. Intersectionality matters here; my communities of Muslim and BIPOC researchers are a continual source of inspiration.

What Advice Would You Offer to Women Aspiring to Venture into the AI Field?

Build a support network in which other people’s wins feel like your own; put simply, find your “girl’s girl” at work. The women and allies I came up with alongside are my favorite people to grab coffee with and my go-to late-night calls before deadlines. One of the best pieces of career advice I have come across was shared by Arvind Narayanan on the platform formerly known as Twitter: the “Liam Neeson Principle” of not being the smartest person in the room, but having a particular set of skills.

What Are Some of the Most Urgent Challenges Confronting the Evolution of AI?

The most pressing challenges in AI are themselves constantly evolving, which is why international coordination on building safer systems for all peoples is so critical. People who use and are affected by these systems have different preferences and ideas of what safety means, even within the same country. And the issues that arise depend not only on how AI advances but also on the environment in which it is deployed: more digitized economies, for example, face greater exposure to cyber threats, so the risk landscape varies by region.

What Concerns Should AI Users Be Mindful Of?

Users should recognize that technical solutions rarely, if ever, fully mitigate risks and harms. Improving AI literacy is important, but people also need to invest in a range of safeguards as risks evolve. For example, I am excited about further research into watermarking as a technical tool, but we also need coordinated guidance from policymakers on how generated content is distributed, especially on social media platforms.

What Constitutes the Responsible Development of AI?

Responsible AI development means actively involving the stakeholders affected by these systems and continually re-evaluating how we assess and implement safety techniques. Both the beneficial applications and the potential harms of AI keep evolving, which calls for iterative feedback loops. Improving AI safety should be a collective effort across the field, one of constant improvement: the bar for evaluating models in 2024 is far higher than it was in 2019. Human evaluations remain extremely valuable, but assessments are increasingly being standardized, reflecting a shift toward more robust evaluation frameworks.

How Can Investors Advocate for the Ethical Advancement of AI?

Investors are already helping shape safety and policy in AI. It is encouraging to see so many investors and venture capital firms engaging in safety and policy conversations, including through open letters and Congressional testimony. I am eager to hear more from investors about what stimulates innovation across sectors, especially as AI use expands beyond the traditional tech industries.
