To give AI-focused women academics and others who have made significant contributions to the AI revolution a well-deserved spotlight, TechCrunch is launching a series of interviews showcasing these remarkable women. The series aims to recognize work that too often goes unnoticed amid the AI boom. Additional profiles can be found here.
Sandra Wachter is a professor and senior researcher in data ethics, AI, robotics, algorithms and regulation at the Oxford Internet Institute. A former fellow of The Alan Turing Institute, the U.K.'s national institute for data science and AI, she has studied the ethical and legal dimensions of data science, focusing on cases where opaque algorithms discriminate, particularly along lines of race and gender. She has also explored ways to audit AI systems to combat misinformation and promote fairness.
Q&A
How did you get your start in AI, and what attracted you to the field?
For as long as I can remember, I've been fascinated by the potential of innovation and technology to improve people's lives. But I'm also keenly aware of how technology can harm people. Driven by a strong sense of justice, I've always wanted to find a balance that fosters innovation while safeguarding human rights.

Law, I realized, plays a pivotal role in striking that balance, so I naturally gravitated toward it. Unraveling legal frameworks, spotting loopholes and figuring out how to close them have always been my strengths.

AI is a transformative force that now permeates sectors such as finance, healthcare and criminal justice, and it can do both good and harm. The difference lies in how it is designed and the policies that govern its deployment. The law's potential to shape the ethical contours of AI and ensure that innovation is equitable is what drew me in.
What work in AI are you most proud of?
The work I'm most proud of is a collaboration with Brent Mittelstadt, a philosopher, and Chris Russell, a computer scientist, on bias and fairness in machine learning. Our paper, "The Unfairness of Fair Machine Learning," shows the harmful consequences of enforcing many "group fairness" metrics in practice: achieving fairness by "leveling down" harms everyone involved rather than lifting up disadvantaged groups. That approach is not only ethically troubling but also conflicts with EU and U.K. anti-discrimination law. We have worked to share these findings widely, including with regulators, in the hope of informing policy reforms that curb the potential harms of AI systems.
How do you navigate the challenges of the male-dominated tech and AI industries?
I grew up seeing technology as gender-neutral, a view that was challenged during my school years when societal norms tried to confine me to predefined gender roles. Despite encountering those biases, I've been fortunate to work with supportive allies like Brent Mittelstadt and Chris Russell, to benefit from invaluable mentorship, and to build connections with a diverse network of people working to make tech more inclusive.
What advice would you give to women seeking to enter the AI field?
Connect with like-minded peers and advocates; collective support is essential to overcoming barriers. Embracing interdisciplinary collaboration and challenging conventional wisdom are key to fostering innovative solutions and driving progress.
What are some of the most pressing issues facing AI as it evolves?
AI faces a host of issues that demand robust legal and policy frameworks, from biased datasets that fuel discriminatory outcomes to opaque systems making critical decisions across many domains. Ethical oversight is paramount, and addressing these challenges requires urgent and proactive action.
What should AI users keep in mind?
Amid the prevailing narrative that AI is everywhere and indispensable, users should ask who is pushing that narrative and who stands to gain from it. Scrutinizing who actually benefits from AI deployment, and whether it delivers tangible value, is a crucial step. By identifying where improvement is genuinely needed and where AI can make a positive difference, users can help steer AI innovation toward serving communities rather than merely generating profit.
What is the best way to foster responsible AI development?
Strong regulation mandating responsible AI practices is imperative. There is a misconception that regulation stifles innovation; in reality, ethical regulation nurtures innovation while safeguarding human rights. The parallel with safety regulations in other industries shows how ethical oversight mitigates the potential harms of AI advancements.
How can investors better push for responsible AI?
Investors can champion responsible AI by recognizing the intrinsic value of ethical, unbiased and sustainable AI. Ethical practices and product quality go hand in hand, which makes responsible innovation profitable. By treating ethics as an investment rather than a hindrance, investors can accelerate the adoption of ethical AI practices and help build a more sustainable, equitable technological landscape.