As millions of people turn to ChatGPT for psychological advice, emotional support, or simply to unwind, OpenAI CEO Sam Altman recently issued a noteworthy warning: conversations with ChatGPT are not as legally confidential as many people mistakenly believe.
Speaking on a recent episode of the podcast This Past Weekend, hosted by Theo Von, Altman admitted that there is currently no clear legal framework protecting the privacy of conversations between users and AI chatbots.
"If you talk to a doctor, lawyer, or therapist, that relationship is protected by law. But not with ChatGPT," he said.
The problem is becoming more serious as AI grows into an increasingly popular companion, especially among young people. From relationship conflicts to serious psychological problems, users are unwittingly placing their trust in a platform that is not fully protected by law.
"If there is a legal dispute, we could be asked to provide user information, and I think that is a bad thing," Altman stressed.
Unlike end-to-end encrypted applications such as WhatsApp or Signal, ChatGPT conversation data can be stored, accessed, and used by OpenAI to train models or to screen for policy violations.
Although the company has committed to deleting free-tier conversations within 30 days, chats can still be retained for legal reasons, especially in cases involving security concerns or litigation.
This reality raises serious questions about privacy and ethics when AI is used as a therapy tool or a source of emotional support.
As the line between technology and personal life becomes increasingly blurred, users should think carefully before sharing sensitive information with chatbots, no matter how attentive a listener they seem to be.