
Sarah Kreps: Cornell University’s AI Specialist and Political Science Expert

Sarah Kreps, profiled as part of TechCrunch’s series on women in AI, is a professor of government at Cornell University.

TechCrunch is publishing a series of interviews that spotlight prominent women who have contributed significantly to the AI revolution, giving a long-overdue spotlight to academics and other researchers. As the AI landscape evolves, additional pieces will be published to showcase important work that often escapes notice. The interview follows below.

Sarah Kreps, a political scientist and U.S. Air Force veteran, is a dedicated researcher focusing on U.S. foreign and security policy. She is a professor of government at Cornell University, an adjunct professor of law at Cornell Law School, and an adjunct scholar at West Point’s Modern War Institute.

Kreps’ recent research delves into the advantages and disadvantages of AI technologies, such as OpenAI’s GPT-4, particularly within the realm of politics. In a recent opinion piece for The Guardian, she highlighted the escalating investment in AI, predicting the emergence of an AI arms race not only among businesses but also nations. She expressed concerns about the increasing complexity of the AI governance challenge.

Q&A

How did you first venture into the realm of AI? What specifically captivated you about this field?

My journey into emerging technologies with national security implications began during my time as an Air Force officer, which coincided with the deployment of the Predator drone. Having worked on advanced radar and satellite systems, it was a natural progression for me to delve into this domain. When I moved on to a Ph.D. program, my focus shifted to understanding the impact of new innovations on national security. As I pursued drone-related research, I kept encountering discussions of autonomy, which is inherently entwined with artificial intelligence.

In 2018, I participated in an AI workshop at a D.C. think tank, where OpenAI showcased their groundbreaking GPT-2 model. Against the backdrop of the 2016 election and foreign election interference, I became acutely aware of the potential for generating widespread propaganda and leveraging microtargeting to influence voter beliefs more effectively. This led me to investigate the reliability of GPT-2 and GPT-3 as generators of social content, particularly in the context of potential misuse. Through extensive field trials, I explored the impact of these technologies on our political landscape, unearthing new challenges to our democratic systems.

What achievements in the AI field are you most proud of?

I take immense pride in having conducted a groundbreaking field experiment that underscored the disruptive potential of AI-generated content in shaping legislative agendas. The study was unparalleled at the time and shed light on the transformative power of AI in policymaking.

Additionally, I collaborated with Cornell computer science students on an application aimed at streamlining legislative offices’ responses to constituent emails. Built before the advent of ChatGPT, our AI-driven initiative sought to alleviate the burden on staff members tasked with engaging with the public. While the project was never released, it underscored the intersection of AI, public engagement, and ethical considerations, foreshadowing the questions that tools like ChatGPT would later bring to the fore.

How do you navigate the challenges of the male-dominated AI industry and technology sector?

Fortunately, my research journey has been relatively unmarred by gender-related obstacles. However, I have observed pervasive stereotypes, notably during my interactions in the Bay Area. To overcome such challenges, I advocate for seeking mentors, honing skills, embracing challenges, and cultivating resilience, irrespective of gender.

What advice do you have for individuals aspiring to enter the field of artificial intelligence?

I believe there are abundant opportunities for women in AI, contingent upon their confidence, knowledge acquisition, and perseverance.

What are some of the key challenges confronting AI currently?

One pressing concern is the proliferation of research initiatives fixated on “superalignment” that sidestep the harder question of which values AI should be aligned with. The flawed rollout of Google Gemini showed the repercussions of aligning a model with a particular set of developer values: outputs that inadvertently shape societal perceptions and distort critical issues. These episodes point to the need for a more nuanced approach to AI ethics and governance.

What issues should AI practitioners be mindful of?

As AI permeates more facets of society, users must take a discerning approach to AI-generated content: remain skeptical and verify information against multiple sources before accepting it as factual. The prevalence of propaganda and misinformation makes this kind of critical evaluation essential.

What constitutes the most ethical approach to developing artificial intelligence?

In a recent piece for the Bulletin of the Atomic Scientists, I argued for responsible AI development by critiquing prevailing paradigms and proposing concrete steps toward more ethical practice. Scrutinizing existing practices and insisting on conscientious decision-making can steer AI development toward more ethical and socially responsible outcomes.
