OpenAI’s decision to monitor ChatGPT conversations, detailed in a company blog post, marks a significant shift in how user data is handled. The monitoring targets chats that show indications of violence or potential harm. When reviewers identify a serious threat, a specialized team at the company is authorized to share the relevant chat data with law enforcement; offending accounts can also be banned. The move has raised privacy concerns, as many users previously assumed their conversations with ChatGPT were private. It reflects a heightened focus on AI safety and on the potential for AI-assisted harm.