
### Senior Counsel Rashida Richardson at Mastercard: Championing AI Ethics and Privacy in the Tech Industry


To showcase the achievements of women in AI, TechCrunch is launching a series of interviews with notable women contributing to the AI revolution. The series aims to highlight impactful work that often goes unacknowledged amid the AI boom. More profiles will follow.

Rashida Richardson, holding the position of senior counsel at Mastercard, specializes in legal matters concerning privacy, data protection, and AI. Previously, she served as the director of policy research at the AI Now Institute and as a senior policy advisor for data and democracy at the White House Office of Science and Technology Policy. Richardson, an assistant professor at Northeastern University since 2021, concentrates on the intersection of race and emerging technologies.

Rashida Richardson, senior counsel for AI at Mastercard

How did you get your start in AI, and what drew you to the field?

My background is in civil rights law, where I worked on a range of issues including privacy, surveillance, school desegregation, fair housing, and criminal justice reform. In that work I witnessed the early stages of government adoption of AI-based technologies, which sparked my interest, and I engaged in technology policy efforts in New York State and City to push for greater oversight and evaluation of AI deployments. Over time, though, I grew skeptical about the efficacy of AI solutions, particularly their ability to address systemic problems like school desegregation or fair housing.

That experience revealed significant policy and regulatory gaps in the AI landscape, which motivated me to go deeper into the field. Because few people working in AI shared a similar background, I saw an opportunity to contribute meaningfully and bring a distinct expertise to bear.

So I focused my legal practice and academic research on the policy and legal dimensions of AI development and use.

What work in AI are you most proud of?

I take satisfaction in the growing attention AI issues are receiving from a range of stakeholders, especially policymakers. Technology policy in the United States has historically lagged, failing to address pertinent issues in time, and a few years ago AI seemed headed down the same path: policymakers showed little urgency or understanding in settings like U.S. Senate hearings. Recently, though, there has been a notable shift. AI has moved to the forefront of public discourse, and policymakers increasingly grasp the stakes and the need for informed action. Stakeholders, including industry players, also better recognize AI's distinctive benefits and risks, and the need for novel policy interventions.

How do you navigate the challenges of the male-dominated tech and AI industries?

As a Black woman, I am accustomed to being a minority in many spheres, and the largely homogeneous tech and AI sectors are no exception. These industries have a glaring lack of diversity, much like other powerful fields such as finance and law, but my past experience has prepared me to confront preconceived notions and navigate complicated dynamics. My background spans academia, industry, government, and civil society, which lets me approach these spaces with a nuanced perspective and a proactive stance.

What should AI users keep in mind?

AI users should prioritize understanding both the capabilities and the limitations of different AI applications and models, and should recognize that the law governing AI is still evolving. Public discourse often flattens the nuances of specific AI applications, creating misconceptions about what they can do and what risks they carry. Meanwhile, existing laws apply to AI use in varying and sometimes unclear ways, so a deeper understanding of the current legal landscape, including its ambiguities and potential liabilities, is crucial to addressing unresolved issues effectively.

What constitutes responsible AI development?

Responsible AI development is a complex challenge because foundational principles like fairness and safety are inherently subjective and lack universal definitions. Without shared norms, it is difficult to evaluate whether a given practice is responsible, which can lead to inadvertent harm or misuse. Until global standards or shared frameworks emerge, organizations should establish clear principles, policies, and governance frameworks for responsible AI development, supported by internal oversight mechanisms, benchmarking practices, and adherence to defined standards.

How can investors advocate for responsible AI practices?

Investors can play a pivotal role in fostering responsible AI practices by defining and enforcing standards for AI development and deployment. Terms like “responsible” or “trustworthy” AI currently lack standardized evaluation criteria, which makes them subjective. By actively monitoring AI practices and addressing discrepancies, investors can push AI actors to prioritize human values and societal welfare. Emerging regulations like the EU AI Act will introduce governance requirements, but investor engagement remains crucial for promoting ethical AI practices and aligning incentives with societal values.

Last modified: February 25, 2024