OpenAI could be legally compelled to produce sensitive information and documents that users share with its ChatGPT artificial intelligence chatbot, warns OpenAI CEO Sam Altman.
Altman flagged the confidentiality gap as a "huge issue" during an interview with podcaster Theo Von last week, revealing that, unlike conversations with therapists, lawyers or doctors, conversations with ChatGPT currently carry no legal protections.
"Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it … and we haven't figured that out yet for when you talk to ChatGPT."
He added that if you talk to ChatGPT about "your most sensitive stuff" and there is then a lawsuit, "we could be required to produce that."
Altman's comments come amid growing use of AI for psychological support and for medical and financial advice.
"I think it's very screwed up," Altman said, adding that "we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever."
Lack of a legal framework for AI
Altman also stressed the need for a legal policy framework for AI, calling its absence a "huge problem."
"That's one of the reasons I'm sometimes scared to use certain AI things, because I don't know how much personal information I want to put in, because I don't know who's going to have it."
Related: OpenAI ignored experts when it released an overly agreeable ChatGPT
He believes AI conversations should carry the same concept of privacy that applies to therapists or doctors, and says the policymakers he has spoken to agree the issue needs to be resolved and requires swift action.
Broader surveillance concerns
Altman also voiced concerns about greater surveillance as AI adoption accelerates worldwide.
"I am worried that the more AI in the world we have, the more surveillance the world is going to want," he said, because governments will want to make sure people aren't using the technology for terrorism or other harmful ends.
He said that for this reason privacy cannot be absolute, and that he was "totally willing to compromise some privacy for collective safety," but with a caveat.
"History is that the government takes that way too far, and I'm really nervous about that."
Magazine: Growing number of users take LSD with ChatGPT: AI Eye