
Microsoft temporarily restricts ChatGPT access for security reasons

Blocked from using OpenAI’s ChatGPT by mistake?

70% of participants in Microsoft's Work Trend Index report said they were ready to embrace AI and incorporate it into their workflows to handle routine tasks.

Microsoft employees clearly rely on AI to get work done, especially following the company's substantial investment in OpenAI. However, a recent CNBC report indicated that Microsoft staff were briefly blocked from using ChatGPT on Thursday.

Sources familiar with the matter revealed that Microsoft opted to temporarily restrict access to the AI-powered tool due to concerns regarding security and data integrity. In response to the issue, Microsoft released the following statement:

“While Microsoft has invested in OpenAI and ChatGPT incorporates safeguards to prevent misuse, it remains a third-party tool. As a precautionary measure to safeguard privacy and security, users are advised to exercise caution when utilizing it. This caution extends to other AI providers like Midjourney or Replika.”

Microsoft acknowledged to CNBC that the company had been testing endpoint control systems for large language models and inadvertently enabled the restriction for all employees; access to ChatGPT was promptly restored once the error was identified. Microsoft reiterated its recommendation that employees and customers opt for services offering stronger privacy and security guarantees, such as Bing Chat Enterprise and ChatGPT Enterprise.

Microsoft’s proactive approach underscores the significant concerns surrounding the security and privacy implications of AI technology. While many of these concerns were addressed in an Executive Order by President Biden, additional safeguards and intricate measures are imperative to prevent the misuse of advanced AI systems.

This development comes in the wake of a ChatGPT disruption caused by a DDoS attack, preventing users from fully leveraging the chatbot’s functionalities and resulting in error messages.

Microsoft’s Vigilance on AI Security

A cybersecurity firm’s report in June revealed that over the preceding year, more than 100,000 ChatGPT credentials were traded on illicit online platforms. The firm noted that threat actors used information-stealing malware to harvest these credentials, and emphasized the importance of regular password changes to deter such cyber threats.

Another report indicated that hackers are increasingly leveraging sophisticated techniques, including generative AI, to orchestrate malicious attacks on unsuspecting users. Given these emerging threats, Microsoft’s caution around AI-powered tools, particularly in light of security and privacy concerns, appears justified.

Share your thoughts on security measures and AI ethics in the comments section.

Last modified: December 25, 2023