Craig Martell, the Pentagon’s chief digital and artificial intelligence officer, has expressed concern about the potential for ChatGPT to spread misleading and false information. At the DefCon hacker convention in August, he delivered a compelling talk on the technology, highlighting the importance of trustworthy AI.
Before assuming his current role, Martell, a civilian data scientist, led machine-learning efforts at prominent companies including LinkedIn, Dropbox, and Lyft.
In an increasingly volatile world where nations vie to develop autonomous weapons, getting clear answers from the U.S. military about how far AI can be trusted for combat applications is a challenge.
The conversation has been edited for clarity and brevity.
Q: What is your primary objective?
A: Our primary aim is to deliver decision advantage from the boardroom to the battlefield. It is our duty to develop tools, processes, systems, and policies that empower the entire department to adapt across missions, rather than building one-off solutions for individual tasks.
Q: So, is the goal information dominance? What is essential for success?
A: We are working toward network-centric warfare: getting accurate information to the right place at the right time. The foundation is high-quality data; analytics and metrics build on that, and AI sits at the top.
Q: How should we approach the integration of AI in military applications?
A: AI is essentially using past data to predict future outcomes, and the current wave of AI does not depart from that premise.
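One way to make that premise concrete is a toy predictor that does nothing but count history. The sketch below is purely illustrative (the corpus, function names, and data are invented, not anything the Department uses): a bigram model that predicts the next word as the most frequent successor observed in its training data.

```python
# Minimal sketch of "counting the past to predict the future":
# a bigram frequency model. Illustrative only; all names and data
# here are hypothetical.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """For every word, count which words follow it and how often."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Predict the most frequent historical successor of `word`."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "the report was accurate",
    "the report was late",
    "the data was accurate",
]
model = train_bigram(corpus)
print(predict_next(model, "was"))  # -> "accurate" (seen twice vs. once)
```

Modern systems replace the explicit counting with learned statistics at vastly larger scale, but the underlying move, past frequencies driving future predictions, is the same.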
Q: Pentagon officials say the perceived threat from China makes rapid AI development crucial. Is China leading the race for AI weaponry?
A: Comparing AI development to a nuclear arms race oversimplifies the technology. Nuclear arms were a single, unified capability; AI is a diverse set of tools whose effectiveness must be assessed empirically, use case by use case.
Q: The U.S. government is leveraging AI to support Ukraine. How is your team contributing?
A: Our involvement, under the codename Skyblue, focuses on organizing allies’ support efforts rather than direct engagement with Ukraine.
Q: The debate on autonomous weapons, such as attack drones, raises concerns about human oversight. What is your perspective on this issue?
A: In military applications, technology is a tool for enhancing capabilities, and we have to understand its limitations. Confidence in a system’s functionality dictates how it is deployed. For instance, I trust the adaptive cruise control in my car, but I lack confidence in its lane-keeping technology.
Q: The Air Force’s “loyal wingman” program aims to enable drones to fly alongside manned fighter jets. Can machine perception distinguish between friend and foe?
A: Machine vision has advanced considerably, but whether it can reliably distinguish friend from foe in a given setting is an empirical question. Tailoring systems to specific use cases and setting performance standards for each are crucial for effective deployment.
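As a hedged illustration of per-use-case performance standards (my sketch, not a Defense Department process; every name, threshold, and data point is invented), an acceptance check might compare a model’s measured accuracy against a bar chosen for the specific mission, with a far stricter bar for friend-or-foe discrimination than for lower-stakes triage:

```python
# Hypothetical acceptance check: measure accuracy on a labeled
# evaluation set, then compare it with a mission-specific threshold.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    required_accuracy: float  # acceptance bar, set per mission

def evaluate(model, samples):
    """Fraction of labeled samples the model classifies correctly."""
    correct = sum(1 for x, label in samples if model(x) == label)
    return correct / len(samples)

def cleared_for_use(model, samples, use_case):
    """True only if measured accuracy meets the use case's bar."""
    accuracy = evaluate(model, samples)
    print(f"{use_case.name}: measured {accuracy:.1%}, "
          f"requires {use_case.required_accuracy:.1%}")
    return accuracy >= use_case.required_accuracy

# Invented use cases with different stakes, hence different bars.
triage = UseCase("wide-area imagery triage", 0.90)
discrimination = UseCase("friend-or-foe discrimination", 0.999)

always_friend = lambda image: "friend"  # stand-in model for the demo
samples = [("img_a", "friend"), ("img_b", "foe"), ("img_c", "friend")]
cleared_for_use(always_friend, samples, triage)          # 66.7% < 90.0% -> False
cleared_for_use(always_friend, samples, discrimination)  # 66.7% < 99.9% -> False
```

The design point is simply that the threshold lives with the use case, not with the model: the same system can be acceptable for one mission and unacceptable for another.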
Q: Your work involves large language models and generative AI. When will the Department of Defense implement these technologies?
A: Task Force Lima, launched in August, is examining more than 160 potential applications of large language models. With an emphasis on low risk and security, we aim to identify the scenarios where such technologies can be used effectively.
Q: Recruiting and retaining skilled AI professionals is difficult given the salary gap with the private sector. How significant is this concern for the Pentagon?
A: To address talent retention, the Pentagon is exploring new talent-management approaches, including options for people who want to serve on shorter commitments. Initiatives such as partnering with historically Black colleges and universities aim to diversify recruitment and close skill gaps.