How open the U.S. decides to keep public access to artificial intelligence (AI) will significantly shape data protection policy. The question gained urgency after Microsoft disclosed that state actors from rival nations had used AI to train their operatives.
Phil Siegel, founder of the AI non-profit Center for Advanced Preparedness and Threat Response Simulation, said a choice must be made between keeping AI open to everyone, malicious and benevolent actors alike, and taking a more restrictive approach.
OpenAI, in a recent blog post, identified five state-affiliated "malicious" actors: the Chinese-affiliated Charcoal Typhoon and Salmon Typhoon, the Iranian-affiliated Crimson Sandstorm, the North Korean-affiliated Emerald Sleet, and the Russian-affiliated Forest Blizzard. These groups allegedly used OpenAI services for tasks such as querying open-source information, translation, finding errors in code, and basic coding.
To address these threats, OpenAI proposed a multi-faceted strategy involving enhanced monitoring and disruption of malicious activities, increased collaboration with other AI platforms, and improved transparency to the public.
Sam Altman, the CEO of OpenAI, stressed the importance of continuous innovation, collaboration, and information sharing to thwart malicious actors within the digital ecosystem.
However, Siegel expressed doubts about the effectiveness of these measures, citing the current lack of infrastructure and regulations necessary to combat such threats effectively. He highlighted the need for a structured approach similar to the banking system, which has established mechanisms to prevent illicit activities.
Microsoft also advocated additional measures, such as promptly notifying other AI service providers so they can flag suspicious activity. Collaborative efforts among Microsoft, OpenAI, and MITRE aim to safeguard AI systems and develop countermeasures against evolving cyber threats.
Despite these initiatives, Siegel cautioned that existing processes might not capture the full range of malicious activity, since hackers could employ sophisticated techniques that evade current detection capabilities.
In conclusion, while strides are being made to enhance AI security, the involvement of government agencies and regulatory bodies is deemed crucial to address the evolving landscape of AI-related threats effectively.