
State-sponsored Cyber Attackers Exploiting Microsoft’s Artificial Intelligence Tools

A Microsoft report says online attackers, or hackers, are using large language models like OpenAI’s…

Microsoft has disclosed that state-sponsored online attackers affiliated with Russia, China, and Iran have been leveraging its OpenAI tools to potentially deceive targets and gather sensitive information.

According to a report released by Microsoft on Wednesday, the company has been monitoring hacking groups working with state entities such as Russia’s military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea. These hackers have been supporting their operations with large language models such as OpenAI’s ChatGPT, which use artificial intelligence to generate human-like text from vast amounts of online data.

In response to these findings, Microsoft announced that it will bar state-backed hacking groups from accessing its AI products. Microsoft’s Vice President for Customer Security, Tom Burt, said the company would restrict these groups’ access to the technology even when no law or terms of service had been violated.

While diplomatic officials from Russia, North Korea, and Iran have yet to comment on these allegations, China’s U.S. embassy spokesperson, Liu Pengyu, rejected what he called groundless attacks and advocated for the responsible use of AI technology for the benefit of all.

The revelation that state-sponsored hackers have been utilizing AI tools for espionage purposes is expected to raise concerns about the misuse and proliferation of such technology. Security experts in Western countries have been issuing warnings about malicious actors exploiting AI capabilities since last year.

Both OpenAI and Microsoft characterized the hackers’ use of their AI tools as early-stage and incremental. Burt said the online spies had achieved no significant breakthroughs and described their use of large language models across a range of malicious activities.

Microsoft highlighted specific instances where hacking groups associated with the Russian military intelligence agency, GRU, were researching military technologies related to operations in Ukraine using these models. North Korean hackers were found using the models to generate deceptive content, while Iranian hackers aimed to craft convincing emails to deceive feminist leaders.

Furthermore, Chinese state-backed hackers were observed experimenting with large language models by probing about intelligence agencies, cybersecurity issues, and prominent individuals. The extent of this activity and the number of banned users remain undisclosed.

Burt defended the decision to ban hacking groups from accessing AI tools, emphasizing the novelty and potency of this technology. He underscored the need for vigilance due to the significant capabilities of AI.

The article was originally reported by Raphael Satter for Reuters and adapted by Gregory Stachel for VOA Learning English.

I’m Gena Bennett.

Last modified: February 19, 2024