
Microsoft Temporarily Restricts Employee Access to ChatGPT Amid Security Concerns

On Thursday, Microsoft temporarily blocked its employees from using ChatGPT, the popular OpenAI product in which the company has invested billions. The decision, announced in an internal update, was attributed to security and data concerns.


Microsoft emphasized the need for caution when using external AI services such as ChatGPT, Midjourney, or Replika, citing privacy and security risks. The advisory initially also blocked Canva, a design tool, though the references to both ChatGPT and Canva were later removed.


Microsoft later clarified that the restriction was the unintended result of testing systems for managing large language models (LLMs) and was promptly corrected. The company encourages employees to use Bing Chat Enterprise and ChatGPT Enterprise, which offer stronger privacy and security protections.


The incident reflects a growing trend among large companies of limiting ChatGPT use to protect sensitive data. ChatGPT, trained on vast amounts of internet data, has amassed more than 100 million users thanks to its human-like conversational responses.


Microsoft advises its employees to use Bing Chat, which is powered by OpenAI's models. The company continues to integrate OpenAI's services, which run on Microsoft's Azure cloud, into products such as Windows and Office.


The news comes amid rumors, denied by OpenAI's Sam Altman, that OpenAI had blocked Microsoft 365, and follows a recent attack on ChatGPT by a group calling itself Anonymous Sudan over OpenAI's alleged ties to Israel. Earlier, Microsoft had advised employees to exercise caution with confidential information when using ChatGPT.
